Very happy to announce that I’ve been awarded the Fulbright-Schuman Innovation Grant to research Trans-Atlantic Approaches to Governing Sensor Privacy in Amsterdam and Pisa January 2018-June 2018.
More info here.
Around a year ago, I submitted an amicus brief in the Fourth Circuit arguing that Wikimedia had standing to challenge NSA Upstream surveillance under the First Amendment. The brief was joined by Marc Blitz, Michael Froomkin, David Goldberger, James Grimmelmann, Lyrissa Lidsky, Neil Richards, and Katherine Strandburg, all scholars who have done groundbreaking work on the intersection of privacy and speech online.
Yesterday, the Fourth Circuit held that Wikimedia does, indeed, have standing to challenge Upstream surveillance. The panel explained “that Clapper’s analysis of speculative injury does not control this case, since the central allegations here are not speculative.” In other words, Wikimedia can show a strong-enough likelihood that it has been surveilled, and consequently, the challenge to NSA surveillance can move forward.
This is a big deal. In Clapper v. Amnesty International back in 2013, the Supreme Court held that plaintiffs could not challenge national security surveillance under Section 702 because they could not show that a surveillance program even existed. Because that case relied on a “highly attenuated chain of possibilities,” and plaintiffs could not show that actual surveillance was “certainly impending,” they could not move forward with their challenge. The Fourth Circuit yesterday held that the Snowden (and other) disclosures about national security surveillance have changed the picture for plaintiffs like Wikimedia, allowing challenges under both the Fourth and First Amendments.
First Amendment challenges to surveillance have faced an uphill battle. The late Justice Scalia was particularly skeptical of intangible injuries, and waged a campaign against recognition of the chilling effect in both anonymous speech jurisprudence and challenges to surveillance programs. This Fourth Circuit decision is notable, by contrast, in its receptiveness to First Amendment surveillance harms.
On First Amendment standing to challenge surveillance, the panel pulled cites and quotes from our brief. First, the panel explained that “[i]n First Amendment cases, the injury-in-fact element is commonly satisfied by a sufficient showing of self-censorship, which occurs when a claimant is chilled from exercising his right to free expression.” (Cooksey v. Futrell, 721 F.3d 226, 235 (4th Cir. 2013)). In other words, if you’re actually subject to surveillance, you can assert a First Amendment claim based on the resulting chilling of your speech. Second, the panel noted, as our brief both centrally argued and quoted, that “[t]he leniency of First Amendment standing manifests itself most commonly in the doctrine’s first element: injury-in-fact.” (Id.) In other words, our law generally makes it easier to get First Amendment cases into court, because speech is central to the functioning of our democracy. Third, the panel acknowledged the relationship between privacy and freedom of association: “[w]hen the government collects appellants’ metadata, appellants’ members’ interests in keeping their associations and contacts private are implicated, and any potential ‘chilling effect’ is created at that point.” (ACLU v. Clapper, 785 F.3d 787, 802 (2d Cir. 2015)). As we argued, a chilling effect on association can also give rise to standing.
Thanks to all those who worked on and edited this brief. Thanks also to Ron Collins over at First Amendment News for both covering yesterday’s decision and highlighting our work as best lower court amicus in 2016. And a huge thanks to the wonderful attorneys at the ACLU, in particular Patrick Toomey and Ashley Gorski, for their work on this and other important issues.
Yesterday we filed our amicus brief in Fields & Geraci v. City of Philadelphia, a Third Circuit case on whether there is a First Amendment right to record police officers performing their official duties in a public forum.
The brief is available here.
Amici urge this Court to recognize that the First Amendment covers audiovisual recording of public officials performing their duties in a public place. The First Amendment protects not only pure speech and expressive conduct, but also the corollary rights necessary for free expression and access rights necessary for the functioning of our democracy. Scholars unanimously conclude that the First Amendment protects a “right to record” public officials performing public duties in public locations.
I participated on a panel discussion of drones and privacy law at the Federal Trade Commission (FTC) last week.
Video here; our panel starts at 2:45, but the first panel and presentations are also worth watching.
More information about the event is available here.
I’m hosting a discussion today about self-driving cars with Bryant Walker Smith and Stephen Wu at the interdisciplinary OSU conference on Moral Algorithms: The Ethics of Autonomous Vehicles. Live stream and video should both be available.
My thoughts in Slate: What the Scarlett Johansson Robot Says about the Future.
…could lead to giving stronger property rights to celebrities in their images, with respect to robots. This would shift U.S. law by placing less of an emphasis on hard work, and more of an emphasis on threats to personhood and dignity. Or it could lead to the conclusion that people shouldn’t make robots that look like other real people, at all. Human slavery is often held up as the quintessential illustration of limits on property ownership, both inherently destructive of personhood and fundamentally immoral. What about robotic slavery, wherein the Scarlett Johansson robot feels for all purposes like the human actor you cannot legally enslave?
Inspired by my conversation with April Glaser of Wired for this earlier article.
“There’s no doubt that as the robotics technology democratizes, we’ll see an increase in attempts to make your own personalized Kim Kardashian, for example,” says Ohio State University law professor Margot Kaminski. “And there’s also no doubt in my mind that this will have a gendered component. Siri’s a woman, Cortana’s a woman; if robots exist to perform labor or personal assistances, there’s a darn good chance they’ll be women.”
Toni Massaro and Helen Norton have written a fascinating essay with a first-rate title: Siri-ously? Free Speech Rights for Artificial Intelligence. I had the pleasure of leading a discussion of the piece at We Robot 2016. From the abstract:
…speaker human-ness no longer may be a logically essential part of the First Amendment calculus. We further observe, however, that free speech theory and doctrine provide a basis for regulating, as well as protecting, the speech of nonhuman speakers to serve the interests of their human listeners should strong AI ever evolve to this point. Finally, we note that the futurist implications we describe are possible, but not inevitable. Indeed, contemplating these outcomes for AI speech may inspire rethinking of the free speech theory and doctrine that makes them plausible.
On November 19th, I testified before the House Subcommittee on Commerce, Manufacturing, and Trade on “The Fast-Evolving Uses and Economic Impacts of Drones.” I discussed the impact of drones on privacy, and the First Amendment issues raised by drones. I closed by suggesting adoption of a technology-neutral federal data privacy law based on the Fair Information Practices (FIPs).
My Witness Testimony is available here, and the video is available below.
I’ve been working on a two-part project on how the U.S. will address image capture technology: how will it balance privacy with speech?
The first part is now a paper, forthcoming soon in Washington Law Review: Regulating Real-World Surveillance. The abstract:
A number of laws govern information gathering, or surveillance, by private parties in the physical world. But we lack a compelling theory of privacy harm that accounts for the state’s interest in enacting these laws. Without a theory of privacy harm, these laws will be enacted piecemeal. Legislators will have a difficult time justifying the laws to constituents; the laws will not be adequately tailored to legislative interest; and courts will find it challenging to weigh privacy harms against other strong values, such as freedom of expression.
This Article identifies the government interest in enacting laws governing surveillance by private parties. Using social psychologist Irwin Altman’s framework of “boundary management” as a jumping-off point, I conceptualize privacy harm as interference in an individual’s ability to dynamically manage disclosure and social boundaries. Stemming from this understanding of privacy, the government has two related interests in enacting laws prohibiting surveillance: an interest in providing notice so that an individual can adjust her behavior; and an interest in prohibiting surveillance to prevent undesirable behavioral shifts.
Framing the government interest, or interests, this way has several advantages. First, it descriptively maps onto existing laws: These laws either help individuals manage their desired level of disclosure by requiring notice, or prevent individuals from resorting to undesirable behavioral shifts by banning surveillance. Second, the framework helps us assess the strength and legitimacy of the legislative interest in these laws. Third, it allows courts to understand how First Amendment interests are in fact internalized in privacy laws. And fourth, it provides guidance to legislators for the enactment of new laws governing a range of new surveillance technologies — from automated license plate readers (ALPRs) to robots to drones.