Paul M. Schwartz and Karl-Nikolaus Peifer, TRANSATLANTIC DATA PRIVACY LAW, Georgetown Law Journal (November 2017)
International flows of personal information are more significant than ever, but differences in transatlantic data privacy law imperil this data trade. The resulting policy debate has led the EU to set strict limits on transfers of personal data to any non-EU country--including the United States--that lacks sufficient privacy protections. Bridging the transatlantic data divide is therefore a matter of the greatest significance. In exploring this issue, this Article analyzes the respective legal identities constructed around data privacy in the EU and the United States. It identifies profound differences in the two systems' images of the individual as bearer of legal interests. The EU has created a privacy culture around “rights talk” that protects its “data subjects.” In the EU, moreover, rights talk forms a critical part of the postwar European project of creating the identity of a European citizen. In the United States, in contrast, the focus is on a “marketplace discourse” about personal information and the safeguarding of “privacy consumers”; U.S. data privacy law centers on protecting consumers in a data marketplace. This Article uses its models of rights talk and marketplace discourse to analyze how the EU and United States protect their respective data subjects and privacy consumers. Although the differences are great, there is still a path forward. A new set of institutions and processes can play a central role in developing mutually acceptable standards of data privacy. The key documents in this regard are the General Data Protection Regulation (GDPR), an EU-wide standard that becomes binding in 2018, and the Privacy Shield, an EU-U.S. agreement concluded in 2016. These legal standards require regular interactions between the EU and United States and create numerous points for harmonization, coordination, and cooperation. The GDPR and Privacy Shield also establish new kinds of governmental networks to resolve conflicts.
The future of international data privacy law rests on the development of new understandings of privacy within these innovative structures.
Ignacio N. Cofone, THE DYNAMIC EFFECT OF INFORMATION PRIVACY LAW, Minnesota Journal of Law, Science & Technology (Spring 2017)
Discussions of information privacy typically rely on the idea that there is a tradeoff between privacy and availability of information. But privacy, under some circumstances, can lead to creation of more information. In this article, I identify such circumstances by exploring the ex ante incentives created by entitlements to personal data and evaluating the long-term effects of privacy. In so doing, I introduce an economic justification of information privacy law. Under the standard law & economics account, as long as property rights are defined and transaction costs are low, initial right allocations should be irrelevant for social welfare. But initial allocations matter when either of these two conditions is absent. Allocations also matter for production of goods that do not yet exist. Personal information has these characteristics. While the costs of disseminating information are low, transaction costs to transfer an entitlement over it are not. In addition, availability of information requires disclosure--and thereby imposes costs. This analysis challenges the traditional economic objection to information privacy and provides a new justification for privacy rules by casting them as entitlements over personal information. The approach I develop here provides a framework to identify which types of information ought to be protected and how privacy law should protect them. To do so, it analyzes the placement and optimal protection of personal information entitlements while also examining the commonalities between information privacy and intellectual property. At a more abstract level, it sheds light on the desirability of a sectoral versus an omnibus information privacy law.
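The Coasean point in the abstract--that initial allocations are irrelevant only when transaction costs are low--can be illustrated with a toy numeric sketch. The valuations and the `final_use` helper below are hypothetical, introduced here for illustration; they do not come from the article:

```python
def final_use(initial_holder, firm_value, privacy_value, transaction_cost):
    """Who ends up controlling the personal data after (possible) Coasean
    bargaining over the entitlement. Purely illustrative."""
    if initial_holder == "consumer":
        # The firm buys the data only if its use value exceeds the consumer's
        # privacy valuation plus the cost of transacting.
        return "firm" if firm_value > privacy_value + transaction_cost else "consumer"
    # The consumer buys back privacy only if it is worth more to her than the
    # firm's use value plus the cost of transacting.
    return "consumer" if privacy_value > firm_value + transaction_cost else "firm"

# With negligible transaction costs, the initial allocation is irrelevant:
# the data ends up with the higher valuer either way.
assert final_use("consumer", 10, 6, 0) == final_use("firm", 10, 6, 0) == "firm"

# With high transaction costs, the initial allocation sticks, so where the
# law places the entitlement determines the outcome.
assert final_use("consumer", 10, 6, 5) == "consumer"
assert final_use("firm", 10, 6, 5) == "firm"
```

Under these assumed numbers, the two regimes diverge exactly when transacting is costly, which is the abstract's claim about why the placement of entitlements over personal information matters for welfare.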
Neil Richards and Woodrow Hartzog, PRIVACY'S TRUST GAP: A REVIEW OF OBFUSCATION: A USER'S GUIDE FOR PRIVACY AND PROTEST BY FINN BRUNTON AND HELEN NISSENBAUM (Cambridge and London: The MIT Press, 2015), Yale Law Journal (February 2017)
This Book Review proceeds in four parts. In Part I, we discuss the central arguments and contributions of Obfuscation through the lens of the standard individualistic conception of privacy. We welcome the book's pragmatism and leveraging of practical, financial, and cognitive limitations to frustrate those who would engage in surveillance and data collection. However, we critique Obfuscation's adoption of the individualistic conception of privacy. This account, which is the dominant story of privacy for both regulators and citizens, has been handicapped by a conceptual vocabulary that fails to fully take the importance of relationships and trust into account. Modern privacy policy and discourse thus have a trust gap, failing to account for the importance of trust to our digital society, and failing to provide the incentives to create that trust. By accepting the dominant frame, and by encouraging distrust over trust, obfuscation theory not only falls into the trust gap, but deepens it. Against the backdrop of privacy's trust gap, we then offer both an internal and an external critique of Brunton and Nissenbaum's obfuscation theory. We develop our internal critique in Part II, taking issue with Brunton and Nissenbaum's description of obfuscation as a largely solitary and independent strategy. We argue that even within the parameters of obfuscation theory, people often have to depend upon others to obfuscate effectively. Unless people can trust designers, intermediaries, confederates, and lawmakers to help them obfuscate, the tactic will frequently fail. It is those who must trust others, the weak and vulnerable, who need obfuscation the most. Yet by feeding bad data into the system, obfuscation can have the perverse effect of further corroding social trust. In Part III, we offer a broader, external critique of obfuscation. We caution against leveraging the wisdom of obfuscation into a premature guerrilla war for our privacy. 
Such a strategy has an undeniable romantic appeal, but we do not yet need to resort to a guerrilla war of individuals against the powerful institutions that seek our data. As lawyers, we believe that the first-best solution to the problems of social power that Obfuscation catalogs is not revolution, but regulation. Although it may not always be obvious, privacy is not doomed. Law and public policy can and should play a role in promoting trust and privacy. Contrary to popular and legal rhetoric about the “death of privacy,” there is substantial evidence that the campaign for privacy rights can be not only viable, but also effective. It would be a mistake to cede the high ground of legal reform and fend for ourselves by embracing self-help obfuscation at the expense of trust-based solutions like confidentiality, data ethics, transparency, and data security. Yet ignoring the current evidence that privacy law can do helpful work and rejecting the potential of law is essentially the strategy that Brunton and Nissenbaum recommend. In Part IV, we offer an alternative frame for thinking about privacy problems in the digital age. We propose that a conceptual revolution based upon trust is a better path forward than one based on obfuscation. Drawing upon both our prior work and that of the growing community of scholars working at the intersection of privacy and trust, we offer a blueprint for trust in our digital society. This consists of four foundations of trust--the commitment to be honest about data practices, the importance of discretion in data usage, the need for protection of personal data against outsiders, and the overriding principle of loyalty to the people whose data is being used, so that it is data and not humans that become exploited. We argue that we must recognize the importance of information relationships in our networked, data-driven society. There exist substantial incentives already for digital intermediaries to build trust.
But when incentives and markets fail, the obligation for promoting trust must fall to law and policy. The first-best privacy future will remain one in which privacy is safeguarded by law, as well as private ordering and self-help.
Gabriela Kennedy and Xiaoyan Zhang, CHINA PASSES CYBERSECURITY LAW, Intellectual Property & Technology Law Journal (March 2017)
On November 7, 2016, the Standing Committee of the National People's Congress of China (NPC) passed the controversial Cybersecurity Law (the CSL). The CSL has gone through three readings since the release of the first draft on July 6, 2015, and will take effect in June 2017. As China's first comprehensive privacy and security regulation for cyberspace, the CSL enhances data protection in many respects while also creating compliance challenges for the international community.
Paul R. Gaus, ONLY THE GOOD REGULATIONS DIE YOUNG: RECOGNIZING THE CONSUMER BENEFITS OF THE FCC'S NOW-DEFUNCT PRIVACY REGULATIONS, Minnesota Journal of Law, Science & Technology (Spring 2017)
This Note argues that the FCC's recent rulemaking provides a promising framework to spur much-needed change regarding data privacy practices. The rules are not a panacea. They target only a subset of the vast internet ecosystem, but they favor consumers. They are especially desirable when considering the FTC's limitations in this area and the judiciary's reluctance to hear consumer data cases even in the face of clear statutory violations. Section I.A of this Note provides a brief explanation of the key entities in the internet ecosystem. Section I.B defines consumer privacy. It explores theoretical concepts and policy proposals urging greater transparency and choice for consumers relating to their personally identifiable information. Section I.C discusses the FTC's authority to police privacy interests. Section I.D then outlines the FCC's traditional jurisdiction, the recent Open Internet Order, and the subsequent FCC rulemaking. It then describes consumers' fluctuating access to courts to litigate their own privacy interests, including the Supreme Court's recent opinion in Spokeo v. Robins. Part II of this Note argues the FCC's recent rulemaking is the most effective federal mechanism thus far for protecting consumer privacy interests. It begins by outlining the limitations on the FTC's ability to enforce consumer privacy interests. Part II then argues that the judiciary's commitment to Article III standing impedes consumers' ability to litigate their own privacy interests. Considering these significant obstacles, this Note analyzes how the FCC's regime provides advantages to consumers in ways the FTC and the courts cannot, or will not, do.
Ifeoma Ajunwa, Kate Crawford, and Jason Schultz, LIMITLESS WORKER SURVEILLANCE, California Law Review (June 2017)
From the Pinkerton private detectives of the 1850s, to the closed-circuit cameras and email monitoring of the 1990s, to new apps that quantify the productivity of workers, and to the collection of health data as part of workplace wellness programs, American employers have increasingly sought to track the activities of their employees. Starting with Taylorism and Fordism, American workers have become accustomed to heightened levels of monitoring that have only been mitigated by the legal counterweight of organized unions and labor laws. Thus, along with economic and technological limits, the law has always been presumed to be a constraint on these surveillance activities. Recently, technological advancements in several fields--big data analytics, communications capture, mobile device design, DNA testing, and biometrics--have dramatically expanded capacities for worker surveillance both on and off the job. While the cost of many forms of surveillance has dropped significantly, new technologies make the surveillance of workers even more convenient and accessible, and labor unions have become much less powerful in advocating for workers. The American worker must now contend with an all-seeing Argus Panoptes built from technology that allows for the trawling of employee data from the Internet and the employer collection of productivity data and health data, with the ostensible consent of the worker. This raises the question of whether the law still remains a meaningful avenue to delineate boundaries for worker surveillance. In this Article, we start from the normative viewpoint that the right to privacy is not an economic good that may be exchanged for the opportunity for employment. We then examine the effectiveness of the law as a check on intrusive worker surveillance, given recent technological innovations.
In particular, we focus on two popular trends in worker tracking--productivity apps and worker wellness programs--to argue that current legal constraints are insufficient and may leave American workers at the mercy of 24/7 employer monitoring. We consider three possible approaches to remedying this deficiency of the law: (1) a comprehensive omnibus federal information privacy law, similar to approaches taken in the European Union, which would protect all individual privacy to various degrees regardless of whether one is at work or elsewhere and without regard to the sensitivity of the data at issue; (2) a narrower, sector-specific Employee Privacy Protection Act (EPPA), which would focus on prohibiting specific workplace surveillance practices that extend outside of work-related locations or activities; and (3) an even narrower sector- and sensitivity-specific Employee Health Information Privacy Act (EHIPA), which would protect the most sensitive types of employee data, especially those that could arguably fall outside of the Health Insurance Portability and Accountability Act's (HIPAA) jurisdiction, such as wellness and other data related to health and one's personhood.
The Harvard Law Review Association, FOURTH AMENDMENT--THIRD-PARTY DOCTRINE--FOURTH CIRCUIT HOLDS THAT GOVERNMENT ACQUISITION OF HISTORICAL CELL-SITE LOCATION INFORMATION IS NOT A SEARCH.--UNITED STATES v. GRAHAM, 824 F.3D 421 (4TH CIR. 2016) (EN BANC), Harvard Law Review (February 2017)
The Supreme Court has held that people cannot reasonably expect privacy in information they willingly disclose to third parties and, thus, that government intrusions on such information are not Fourth Amendment searches. Lower courts have also held that historical cell-site location information (CSLI)--a carrier's records of the cell tower used to route a user's calls and messages (typically the tower closest to the user)--is such information willingly disclosed to third parties. Recently, in United States v. Graham, the Fourth Circuit upheld that rule, finding that two defendants could not reasonably expect privacy in CSLI that police used to place them at the crime scene. That holding shows the third-party doctrine's flaw: in its focus on categorizing behavior, it does not accurately estimate what society today would consider reasonable. Courts should update the doctrine to reflect our complex and changing relationship with technology.
Matthew B. Kugler and Lior Jacob Strahilevitz, The Myth of Fourth Amendment Circularity, The University of Chicago Law Review (Fall 2017)
The Supreme Court's decision in Katz v United States made people's reasonable expectations of privacy the touchstone for determining whether state surveillance amounts to a search under the Fourth Amendment. Ever since Katz, Supreme Court justices and numerous scholars have referenced the inherent circularity of taking the expectations-of-privacy framework literally: people's expectations of privacy depend on Fourth Amendment law, so it is circular to have the scope of the Fourth Amendment depend on those same expectations. Nearly every scholar who has written about the issue has assumed that the circularity of expectations is a meaningful impediment to having the scope of the Fourth Amendment depend on what ordinary people actually expect. But no scholar has tested the circularity narrative's essential premise: that popular sentiment falls into line when salient, well-publicized changes in Fourth Amendment law occur.
Our Article conducts precisely such a test. We conducted surveys on census-weighted samples of US citizens immediately before, immediately after, and long after the Supreme Court's landmark decision in Riley v California. The decision in Riley was unanimous and surprising. It substantially altered Fourth Amendment law on the issue of the privacy of people's cell phone content, and it was a major news story that generated relatively high levels of public awareness in the days after it was decided. We find that the public began to expect greater privacy in the contents of their cell phones immediately after the Riley decision, but this effect was small and confined to the 40 percent of our sample that reported having heard of the decision. One year after Riley, these heightened expectations had disappeared completely. There was no difference from baseline two years after Riley either, with privacy expectations remaining as they were prior to the decision. Our findings suggest that popular privacy expectations are far more stable than most judges and commentators have been assuming. Even in the ideal circumstance of a clear, unanimous, and widely reported decision, circularity in Fourth Amendment law is both weak and short-lived. In the longer term, Fourth Amendment circularity appears to be a myth.
David E. Pozen, Privacy-Privacy Tradeoffs, The University of Chicago Law Review (2016)
Legal and policy debates about privacy revolve around conflicts between privacy and other goods. But privacy also conflicts with itself. Whenever securing privacy on one margin compromises privacy on another margin, a privacy-privacy tradeoff arises. This Essay introduces the phenomenon of privacy-privacy tradeoffs, with particular attention to their role in NSA surveillance. After explaining why these tradeoffs are pervasive in modern society and developing a typology, the Essay shows that many of the arguments made by the NSA's defenders appeal not only to a national-security need but also to a privacy-privacy tradeoff. An appreciation of these tradeoffs, the Essay contends, illuminates the structure and the stakes of debates over surveillance law specifically and privacy policy generally.
Stacy-Ann Elvy, PAYING FOR PRIVACY AND THE PERSONAL DATA ECONOMY, Columbia Law Review (October 2017)
Growing demands for privacy and increases in the quantity and variety of consumer data have engendered various business offerings to allow companies, and in some instances consumers, to capitalize on these developments. One such example is the emerging "personal data economy" (PDE) in which companies, such as Datacoup, purchase data directly from individuals. At the opposite end of the spectrum, the "pay-for-privacy" (PFP) model requires consumers to pay an additional fee to prevent their data from being collected and mined for advertising purposes. This Article conducts a simultaneous in-depth exploration of the impact of burgeoning PDE and PFP models. It identifies a typology of data-business models, and it uncovers the similarities and tensions between a data market controlled by established companies that have historically collected and mined consumer data for their primary benefit and one in which consumers play a central role in monetizing their own data. The Article makes three claims. First, it contends that PFP models facilitate the transformation of privacy into a tradable product, may engender or worsen unequal access to privacy, and could further enable predatory and discriminatory behavior. Second, while the PDE may allow consumers to regain a semblance of control over their information by enabling them to decide when and with whom to share their data, consumers' direct transfer or disclosure of personal data to companies for a price or personalized deals creates challenges similar to those found in the PFP context and generates additional concerns associated with innovative monetization techniques. Third, existing frameworks and proposals may not sufficiently ameliorate these concerns. The Article concludes by offering a path forward.
Sofia Grafanaki, AUTONOMY CHALLENGES IN THE AGE OF BIG DATA, Fordham Intellectual Property, Media and Entertainment Law Journal (Summer 2017)
This Article examines how technological advances in the field of “Big Data” challenge meaningful individual autonomy (and by extension democracy), are redefining the process of self-formation and the relationship between self and society, and can cause harm that cannot be addressed under current regulatory frameworks. Adopting a theory of autonomy that includes both the exploration process an individual goes through in order to develop authentic and independent desires that lead to his actions, as well as the independence of the actions and decisions themselves, this Article identifies three distinct categories of autonomy challenges that Big Data technologies present. The first is the rise of many “little brothers,” putting individuals in a state of constant surveillance, the very knowledge of which undermines individual self-determination. In the governmental context, the idea of always being watched has long been established as a threat to freedom of expression, free speech, “intellectual privacy,” and associational freedoms. The discussion does not focus on government surveillance per se, but draws from the same reasoning to illustrate how similar dangers are present even when it is not the government or a single entity behind the surveillance. The second is an algorithmic self-reinforcing loop in every aspect of our lives: in a world where everything is tracked, the “choices” one is given are based on assumptions about him, and these same “choices” are the ones that determine and become the new assumptions, thereby creating a constantly fortified self-fulfilling prophecy. The very structure of the algorithms used is based on statistical models trained to ignore outliers, collect (im)perfect information about the past, and use that to recreate the future.
This is true both on an individual level and for society more generally. The third is the use of persuasive computing techniques, allowing companies to move beyond simply measuring customer behavior to creating products that are designed with the specific goal of forming new habits. Finally, this Article demonstrates the need for the development of a vocabulary to assess the ethical, political, and sociological values of these algorithms, and for a full set of ethical norms that can lay the foundations of democracy on the web.
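The algorithmic self-reinforcing loop described in this abstract--recommendations based on an initial guess generate the clicks that then confirm that guess--can be made concrete with a small simulation. Everything here (the function, the numbers, the uniform-interest user) is a hypothetical sketch for illustration, not a model drawn from the article:

```python
import random

def simulate_feedback_loop(rounds=200, catalog=5, shown=2, seed=7):
    """Simulate a recommender that only shows the categories it already
    believes the user prefers. Because the user can only click on what is
    shown, every click 'confirms' the system's existing belief."""
    rng = random.Random(seed)
    # The system's initial guess about the user is arbitrary...
    belief = [rng.random() for _ in range(catalog)]
    for _ in range(rounds):
        # ...yet only the top-`shown` categories under that guess are offered,
        ranked = sorted(range(catalog), key=lambda c: belief[c], reverse=True)
        slate = ranked[:shown]
        # and the user, whose true interests are uniform across the catalog,
        # clicks uniformly among whatever is offered.
        clicked = rng.choice(slate)
        belief[clicked] += 1.0  # the click is recorded as evidence of preference
    return belief

belief = simulate_feedback_loop()
top_two_share = sum(sorted(belief, reverse=True)[:2]) / sum(belief)
# Nearly all accumulated "evidence" ends up in the two categories the system
# happened to rank first, even though the user's true interests were flat.
print(top_two_share)
```

Under these assumptions the arbitrary first guess hardens into an apparent fact about the user, which is the self-fulfilling prophecy the abstract describes.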