EFF: LAPD Requested Ring Footage of Black Lives Matter Protests

LAPD Requested Ring Footage of Black Lives Matter Protests comes from the Electronic Frontier Foundation, a nonprofit organization defending civil liberties in the digital world. Ring is Amazon’s home security company, best known for its doorbell camera.

Along with other civil liberties organizations and activists, EFF has long warned that Amazon Ring and other networked home surveillance devices could be used to monitor political activity and protests. Now we have documented proof that our fears were well-founded.

According to emails obtained by EFF, the LAPD sent requests to Amazon Ring users specifically targeting footage of Black-led protests against police violence that occurred in cities across the country last summer. While it is clear that police departments and federal law enforcement across the country used many different technologies to spy on protests, including aerial surveillance and semi-private camera networks, this is the first documented evidence that a police department specifically requested footage from networked home surveillance devices related to last summer’s political activity.

 

A map of Ring-police partnerships in the United States. Clicking the map will bring you to an interactive version.

 

In May 2019, LAPD became the 240th public safety agency to sign a formal partnership with Ring and its associated app, Neighbors. That number has since skyrocketed to more than 2,000 government agencies. The partnerships allow police to use a law-enforcement portal to canvass local residents for footage.

Requests from police to Ring users typically contain the name of the investigating detective and an explanation of what incident they are investigating. Police requesting footage also specify a time period, usually a range spanning several hours, because it’s often hard to identify exactly what time certain crimes occurred, such as an overnight car break-in.

 

A June 16, 2020 email showing an LAPD request for footage to an Amazon Ring user.

In its response to EFF’s public records requests, the LAPD produced several messages it sent to Ring users, but redacted details such as the circumstances being investigated and the dates and times of footage requested. However, one email request on behalf of the LAPD “Safe L.A. Task Force” specifically asked for footage related to “the recent protests.” Troublingly, even in that request the LAPD redacted the dates and times sought. This practice is concerning because if police request hours of footage on either side of a specific incident, they may receive hours of people engaging in First Amendment protected activities, with only a vague hope that a camera captured illegal activity at some point. Redacting the time spans requested covers up how much protest footage the police department sought to acquire.

EFF asked the LAPD for clarification of the specific context under which the department sent requests concerning the protests. The LAPD would not cite a specific crime they were investigating, like a theft from a specific storefront or an act of vandalism. Instead, the LAPD told EFF, “SAFE LA Task Force used several methods in an attempt to identify those involved in criminal behavior.”

Their full response reads:

The SAFE LA Task Force used several methods in an attempt to identify those involved in criminal behavior. One of the methods was surveillance footage. It is not uncommon for investigators to ask businesses or residents if they will voluntarily share their footage with them. Often, surveillance footage is the most valuable piece in an investigators case.

Police have used similar tactics before. EFF investigated the San Francisco Police Department’s use of a Business Improvement District’s network of over 400 cameras to spy on protests in early June 2020, under the guise of public safety and situational awareness. We learned that police gained over a week of live access to the camera network, as well as a 12-hour “data dump” of footage from all cameras in the network. In October 2020, EFF and ACLU of Northern California filed a lawsuit against the City and County of San Francisco on behalf of three protesters. We seek a court order requiring the city to comply with the city’s Surveillance Technology Ordinance by prohibiting the SFPD from acquiring, borrowing, or using non-city networks of surveillance cameras absent prior approval from the city’s Board of Supervisors.

The LAPD announced the creation of the Safe L.A. Task Force on June 2, 2020, in order to receive tips and investigate protests against police violence that started just four days earlier. The LAPD misleadingly labeled these protests as an “Unusual Occurrence (UO).” The FBI announced they would join the task force “in order to investigate significant crimes that occurred at or near locations where legitimate protests and demonstrations took place in Los Angeles beginning on May 29, 2020.” The Los Angeles Police Department, Beverly Hills Police Department, Santa Monica Police Department, Torrance Police Department, Los Angeles City Fire Department, Los Angeles City Attorney’s Office, Los Angeles County District Attorney’s Office, and United States Attorney’s Office for Los Angeles also joined the task force.

Protests began in Los Angeles County following the Minneapolis police killing of George Floyd on May 25, 2020. LAPD sent a number of requests for Ring footage starting at the end of May, but because of the extensive redactions of circumstances, dates, and times, we’re unable to verify whether all of those requests relate to the protests. However, some of the detectives associated with the Safe L.A. Task Force are the same people who began requesting Ring footage at the end of May and early June.

 

On June 1, 2020, the same day as Los Angeles’ largest protests, police received footage from a Ring user.

 

The LAPD’s response shows that on June 1, 2020, the morning after one of the largest protests of last summer in Los Angeles, Det. Gerry Chamberlain sent Ring users a request for footage. Within two hours, Chamberlain received footage from at least one user. The nature of the request was redacted; however, the next day, his unit was formally assigned to the protest task force.

The LAPD’s handling of last summer’s protests is under investigation after widespread complaints about unchecked suppression and the use of disproportionate tactics. At least 10 LAPD officers have been taken off the street pending internal investigations of their use of force during the protests.

Technologies like Ring have the potential to provide the police with video footage covering nearly every inch of an entire neighborhood. This poses an incredible risk to First Amendment rights. People are less likely to exercise their right to political speech, protest, and assembly if they know that police can acquire and retain footage of them. This creates risks of retribution or reprisal, especially at protests against police violence. Ring cameras, ubiquitous in many neighborhoods, create the possibility that if enough people share footage with police, authorities are able to follow protestors’ movements, block by block. Indeed, Gizmodo found that on a walk of less than a mile between a school and its gymnasium in Washington D.C., students had to walk by no less than 13 Ring cameras, whose owners regularly posted footage to social media. Activists may need to walk past many more such cameras during a protest.

We Need New Legal Limits on Police Access

This incident once again shows that modern surveillance technologies are wildly underregulated in the United States. A number of U.S. Senators and other elected officials have sent inquiries to Amazon to uncover how few legal restrictions govern this rapidly growing surveillance empire. The United States is ripe for a legislative overhaul to protect bystanders, as well as consumers, from both corporations and government. A great place to start would be stronger limits on government access to data collected by private companies.

One of EFF’s chief concerns is the ease with which Ring-police partnerships allow police to make bulk requests to Ring users for their footage, although a new feature does allow users to opt out of requests. Ring has introduced end-to-end encryption, preventing police from getting footage directly from Amazon, but this doesn’t limit their ability to send these blanket requests to users. Such “consent searches” pose the greatest problems in high-coercion settings, like police “asking” to search your phone during a traffic stop, but they are also highly problematic in less-coercive settings, like bulk email requests for Ring footage from many residents.

Thus, an important way to prevent police from using privately-owned home security devices as political surveillance machines would be to impose strict regulations governing “Internet of Things” consent search requests.

EFF has previously argued that in less-coercive settings, consent searches should be limited by four rules. First, police must have reasonable suspicion that crime is afoot before sending a request to a specific user. Such requests must be specific, targeting a particular time and place where there is reasonable suspicion that crime has happened, rather than general requests that, for example, blanket an entire neighborhood for an entire day in order to investigate one broken window. Second, police must collect and publish statistics about their consent searches of electronic devices, to deter and detect racial profiling. Third, police and reviewing courts must narrowly construe the scope of a person’s consent to search their device. Fourth, before an officer attempts to acquire footage from a person’s Ring camera, the officer must notify the person of their legal right to refuse.

Ring has made some positive steps concerning its users’ privacy—but the privacy of everyone else in the neighborhood is still in jeopardy. The growing ubiquity of Ring means that if the footage exists, police will continue to access more and more of it. The LAPD’s use of Ring cameras to gather footage of protesters should be a big red flag for politicians.

You can view the emails between Ring and the LAPD below:

 

Tenth Amendment Center: Gov’t Worried that Mask Use Thwarts Gov’t Facial Recognition

From the Tenth Amendment Center, DHS Worried Widespread Mask Use Will Thwart Government Facial Recognition.

There has been a lot of controversy over masks, but no matter what you think about the efficacy of face coverings in preventing the spread of COVID-19, there is one advantage to masking up. The U.S. Department of Homeland Security (DHS) has expressed concern that widespread use of masks will thwart facial recognition.

A DHS “intelligence note” dated May 22 came to light in the BlueLeaks trove of law enforcement documents. The DHS Intelligence Enterprise Counterterrorism Mission Center, in conjunction with a variety of other agencies including Customs and Border Protection and Immigration and Customs Enforcement, drafted the note. It “examines the potential impacts that widespread use of protective masks could have on security operations that incorporate face recognition systems — such as video cameras, image processing hardware and software, and image recognition algorithms — to monitor public spaces during the ongoing Covid-19 public health emergency and in the months after the pandemic subsides.”

According to The Intercept, the Minnesota Fusion Center distributed the notice on May 26, as protests over the killing of George Floyd were ramping up. “Email logs included in the BlueLeaks archive show that the note was also sent to city and state government officials and private security officers in Colorado and, inexplicably, to a hospital and a community college.”

The note warned, “We assess violent extremists and other criminals who have historically maintained an interest in avoiding face recognition are likely to opportunistically seize upon public safety measures recommending the wearing of face masks to hinder the effectiveness of face recognition systems in public spaces by security partners.”

The note also expresses more general concern about mask-wearing. One header reads, “Face Recognition Systems Likely to be Less Effective as Widespread Wear of Face Coverings for Public Safety Purposes Continue.”

“We assess face recognition systems used to support security operations in public spaces will be less effective while widespread public use of facemasks, including partial and full face covering, is practiced by the public to limit the spread of Covid-19.”

The debate on masking aside, thwarting facial recognition is a good thing because the federal government is aggressively pushing the expansion of its vast and increasingly intrusive facial recognition network.

THE GROWING FEDERAL PROGRAM

A recent report revealed that the federal government has turned state drivers’ license photos into a giant facial recognition database, putting virtually every driver in America in a perpetual electronic police lineup. The revelations generated widespread outrage, but this story isn’t new. The federal government has been developing a massive, nationwide facial recognition system for years.

The FBI rolled out a nationwide facial-recognition program in the fall of 2014, with the goal of building a giant biometric database with pictures provided by the states and corporate friends.

In 2016, the Center on Privacy and Technology at Georgetown Law released “The Perpetual Lineup,” a massive report on law enforcement use of facial recognition technology in the U.S. You can read the complete report at perpetuallineup.org. The organization conducted a year-long investigation and collected more than 15,000 pages of documents through more than 100 public records requests. The report paints a disturbing picture of intense cooperation between the federal government, and state and local law enforcement to develop a massive facial recognition database.

“Face recognition is a powerful technology that requires strict oversight. But those controls, by and large, don’t exist today,” report co-author Clare Garvie said. “With only a few exceptions, there are no laws governing police use of the technology, no standards ensuring its accuracy, and no systems checking for bias. It’s a wild west.”

There are many technical and legal problems with facial recognition, including significant concerns about the accuracy of the technology, particularly when reading the facial features of minority populations. During a test run by the ACLU of Northern California, facial recognition misidentified 26 members of the California legislature as people in a database of arrest photos.
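To make the accuracy concern concrete, here is a back-of-the-envelope sketch (our illustration, not EFF’s or the ACLU’s analysis) of why even a tiny per-comparison error rate becomes a near-certainty of false matches when one face is searched against a large database. The error rate below is an assumed figure for illustration only:

```python
# Illustrative model (assumed numbers, not a vendor specification):
# if each one-to-one comparison has false-match rate p, then a
# one-to-many search against a database of N photos flags at least
# one innocent person with probability 1 - (1 - p)**N.

def prob_false_hit(p: float, n: int) -> float:
    """Probability a one-to-many search returns at least one false match."""
    return 1 - (1 - p) ** n

# Even a seemingly tiny error rate compounds against a large database.
p = 1e-5  # assumed per-comparison false-match rate
for n in (1_000, 100_000, 10_000_000):
    print(f"database of {n:>10,}: {prob_false_hit(p, n):.1%} chance of a false hit")
```

The point is qualitative: searching a face against a statewide driver’s-license database multiplies whatever error rate the algorithm has by millions of comparisons, which is how 26 legislators can end up “matched” to arrest photos.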

With facial recognition technology, police and other government officials have the capability to track individuals in real-time. These systems allow law enforcement agents to use video cameras and continually scan everybody who walks by. According to the report, several major police departments have expressed an interest in this type of real-time tracking. Documents revealed agencies in at least five major cities, including Los Angeles, either claimed to run real-time face recognition off of street cameras, bought technology with the capability, or expressed written interest in buying it.

In all likelihood, the federal government heavily involves itself in helping state and local agencies obtain this technology. The feds provide grant money to local law enforcement agencies for a vast array of surveillance gear, including ALPRs, stingray devices and drones. The federal government essentially encourages and funds a giant nationwide surveillance net and then taps into the information via fusion centers and the Information Sharing Environment (ISE).

Fusion centers were sold as a tool to combat terrorism, but that is not how they are being used. The ACLU pointed to a bipartisan congressional report to demonstrate the true nature of government fusion centers: “They haven’t contributed anything meaningful to counterterrorism efforts. Instead, they have largely served as police surveillance and information sharing nodes for law enforcement efforts targeting the frequent subjects of police attention: Black and brown people, immigrants, dissidents, and the poor.”

Fusion centers operate within the broader ISE. According to its website, the ISE “provides analysts, operators, and investigators with information needed to enhance national security. These analysts, operators, and investigators…have mission needs to collaborate and share information with each other and with private sector partners and our foreign allies.” In other words, ISE serves as a conduit for the sharing of information gathered without a warrant. Known ISE partners include the Office of Director of National Intelligence which oversees 17 federal agencies and organizations, including the NSA. ISE utilizes these partnerships to collect and share data on the millions of unwitting people they track.

Reports that the Berkeley Police Department in cooperation with a federal fusion center deployed cameras equipped to surveil a “free speech” rally and Antifa counterprotests provided the first solid link between the federal government and local authorities in facial recognition surveillance.

See also EFF’s San Francisco Police Accessed Business District Camera Network to Spy on Protestors

 

EFF: Searchable Database of Police Tech Tools Used to Spy on Communities

The Electronic Frontier Foundation reports that it has launched an online map of police surveillance tools in use across the US in EFF Launches Searchable Database of Police Agencies and the Tech Tools They Use to Spy on Communities.

The Electronic Frontier Foundation (EFF), in partnership with the Reynolds School of Journalism at the University of Nevada, Reno, today launched the largest-ever collection of searchable data on police use of surveillance technologies, created as a tool for the public to learn about facial recognition, drones, license plate readers, and other devices law enforcement agencies are acquiring to spy on our communities.

The Atlas of Surveillance database, containing several thousand data points on over 3,000 city and local police departments and sheriffs’ offices nationwide, allows citizens, journalists, and academics to review details about the technologies police are deploying, and provides a resource to check what devices and systems have been purchased locally.

Users can search for information by clicking on regions, towns, and cities, such as Minneapolis, Tampa, or Tucson, on a U.S. map. They can also easily perform text searches by typing the names of cities, counties, or states on a search page that displays text results. The Atlas also allows people to search by specific technologies, which can show how surveillance tools are spreading across the country.

Built using crowdsourcing and data journalism over the last 18 months, the Atlas of Surveillance documents the alarming increase in the use of unchecked high-tech tools that collect biometric records, photos, and videos of people in their communities, locate and track them via their cell phones, and purport to predict where crimes will be committed.

While the use of surveillance apps and face recognition technologies is under scrutiny amid the COVID-19 pandemic and street protests, EFF and students at the University of Nevada, Reno, have been studying and collecting information for more than a year in an effort to aggregate, for the first time, data collected from news articles, government meeting agendas, company press releases, and social media posts.

“There are two questions we get all the time: What surveillance is in my hometown, and how are technologies like drones and automated license plate readers spreading across the country?” said Dave Maass, a senior investigative researcher in EFF’s Threat Lab and a visiting professor at the Reynolds School of Journalism. “A year and a half ago, EFF and the Reynolds School partnered to answer these questions through a massive newsgathering effort, involving hundreds of journalism students and volunteers. What we found is a sprawling spy state that reaches from face recognition in the Hawaiian Islands to predictive policing in Maine, from body-worn cameras in remote Alaska to real-time crime centers along Florida’s Gold Coast.”

Information was collected on the most pervasive surveillance technologies in use, including drones, body-worn cameras, face recognition, cell-site simulators, automated license plate readers, predictive policing, camera registries, police partnerships with Amazon’s Ring camera network, and gunshot detection sensors. It also maps out more than 130 law enforcement tech hubs that process real-time surveillance data. While the Atlas contains a massive amount of data, its content is only the tip of the iceberg and underlines the need for journalists and members of the public to continue demanding transparency from criminal justice agencies. Reporters, students, volunteers, and watchdog groups can submit data or share data sets for inclusion in the Atlas.

“The prevalence of surveillance technologies in our society provides many challenges related to privacy and freedom of expression, but it’s one thing to know that in theory, and another to see hard data laid out on a map,” Reynolds School Professor and Director of the Center for Advanced Media Studies Gi Yun said. “Over a year and a half, Reynolds School of Journalism students at the University of Nevada, Reno have reviewed thousands of news articles and public records. This project not only informs the public debate but helps these students improve their understanding of surveillance as they advance in their reporting careers.”

For the Atlas:
https://atlasofsurveillance.org

For more on street-level surveillance:
https://www.eff.org/issues/street-level-surveillance

EFF: COVID-19 and Digital Rights

The Electronic Frontier Foundation is the leading nonprofit organization defending civil liberties in the digital world. Here are their thoughts on threats and opportunities arising from COVID-19 response, COVID-19 and Digital Rights.

Surveillance. Governments around the world are demanding extraordinary new surveillance powers that many hope will contain the virus’ spread. But many of these powers would invade our privacy, inhibit our free speech, and disparately burden vulnerable groups of people. Mindful of the stakes, we ask three questions when analyzing proposals that would provide greater surveillance powers to the government: Would the proposal work? Would it excessively intrude on our freedoms? Are there sufficient safeguards? Different proposals raise different issues. For example:

  • Government has not shown that some intrusive technologies would work, such as phone location surveillance, which is insufficiently granular to identify when two people were close enough together to transmit the virus.
  • Some surveillance proposals are too dangerous to a democratic society, such as dragnet surveillance cameras in public places that use face recognition or thermal imaging, mounting such technologies on drones, or giving police officers access to public health data about where people who have tested positive live.
  • Some technologies, such as aggregate location data used to inform public health decisions, need strict safeguards.
  • No COVID tracking app will work absent widespread testing and interview-based contact tracing. Bluetooth proximity is the most promising approach so far, but needs rigorous security testing and data minimization. No one should be forced to use it.
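For readers curious what “Bluetooth proximity” tracing looks like in practice, here is a deliberately simplified sketch, loosely modeled on the decentralized Apple/Google exposure-notification design. Every name, key size, and interval count below is illustrative, not the real protocol:

```python
# Simplified sketch of decentralized Bluetooth proximity tracing.
# All parameters here (16-byte keys, 144 intervals/day) are illustrative.
import hashlib
import hmac
import secrets

def rolling_ids(daily_key: bytes, intervals: int = 144) -> list[bytes]:
    """Derive short-lived broadcast identifiers from a secret daily key."""
    return [hmac.new(daily_key, interval.to_bytes(4, "big"),
                     hashlib.sha256).digest()[:16]
            for interval in range(intervals)]

# Each phone keeps its daily key secret and broadcasts only rolling IDs,
# which nearby phones log locally.
alice_key = secrets.token_bytes(16)
bob_heard = {rolling_ids(alice_key)[37]}  # Bob's phone logged one ID nearby

# If Alice tests positive, she publishes her daily key; Bob's phone
# re-derives her IDs locally and checks for overlap. No central
# location database is ever consulted.
exposed = any(rid in bob_heard for rid in rolling_ids(alice_key))
print("exposure match:", exposed)  # True
```

The privacy property EFF highlights falls out of the design: the server only ever sees daily keys volunteered by people who test positive, and all matching happens on the user’s own device.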

Many new government surveillance programs are being built in partnership with corporations that hold vast stores of consumers’ personal data. We need new laws to protect our data privacy.

Free speech. The free flow of ideas about COVID-19 is vital. This includes anonymous whistle-blowing about containment efforts, online criticisms of government responses to the crisis, and prisoner access to social media to tell the world about outbreaks behind bars. Governments will inevitably abuse any new powers to censor what they deem false information about the virus. When online platforms increase their reliance on automated content moderation, in part because human moderators cannot safely come to work, those moderation “decisions” must be temporary, transparent, and easily appealable.

Government transparency. Government decision-making about the virus must be transparent. When governments temporarily close the physical spaces where they make decisions, for purposes of social distancing, they must adopt new transparency accommodations, such as broadcasting their proceedings. While government responses to public records requests may be slower during this public health crisis, the outbreak is no excuse to shut them down altogether…(continues)

EFF: EARN IT Bill to Scan Every Online Message

From digital civil liberties champion Electronic Frontier Foundation, The EARN IT Bill Is the Government’s Plan to Scan Every Message Online

Imagine an Internet where the law required every message sent to be read by government-approved scanning software. Companies that handle such messages wouldn’t be allowed to securely encrypt them, or they’d lose legal protections that allow them to operate.

That’s what the Senate Judiciary Committee has proposed and hopes to pass into law. The so-called EARN IT bill, sponsored by Senators Lindsey Graham (R-SC) and Richard Blumenthal (D-CT), would strip Section 230 protections away from any website that doesn’t follow a list of “best practices,” meaning those sites could be sued into bankruptcy. The “best practices” list would be created by a government commission, headed by Attorney General Barr, who has made it very clear he would like to ban encryption and guarantee law enforcement “legal access” to any digital message.

The EARN IT bill had its first hearing today, and its supporters’ strategy is clear. Because they didn’t put the word “encryption” in the bill, they’re going to insist it doesn’t affect encryption.

“This bill says nothing about encryption,” co-sponsor Sen. Blumenthal said at today’s hearing. “Have you found a word in this bill about encryption?” he asked one witness.

It’s true that the bill’s authors avoided using that word. But they did propose legislation that enables an all-out assault on encryption. It would create a 19-person commission that’s completely controlled by the Attorney General and law enforcement agencies. And, at the hearing, a Vice-President at the National Center for Missing and Exploited Children (NCMEC) made it clear [PDF] what he wants the best practices to be. NCMEC believes online services should be made to screen their messages for material that NCMEC considers abusive; use screening technology approved by NCMEC and law enforcement; report what they find in the messages to NCMEC; and be held legally responsible for the content of messages sent by others.

You can’t have an Internet where messages are screened en masse and also have end-to-end encryption, any more than you can create backdoors that can only be used by the good guys. The two are mutually exclusive. Concepts like “client-side scanning” aren’t a clever route around this; such scanning is just another way to break end-to-end encryption. Either the message remains private to everyone but its recipients, or it’s available to others…
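The mutual exclusivity EFF describes can be shown with a deliberately minimal sketch (a toy one-time pad, not a real messaging protocol): once a message is end-to-end encrypted, the relay in the middle has nothing meaningful to scan.

```python
# Toy illustration (a one-time pad, not a production protocol) of why
# mass message screening and end-to-end encryption are mutually exclusive:
# the relay carrying the message sees only ciphertext it cannot interpret.
import secrets

def encrypt(key: bytes, plaintext: bytes) -> bytes:
    # XOR each byte with a random key byte; XOR is its own inverse.
    return bytes(k ^ p for k, p in zip(key, plaintext))

decrypt = encrypt  # decryption is the same XOR operation

message = b"meet at the protest at noon"
key = secrets.token_bytes(len(message))  # shared only by the two endpoints

ciphertext = encrypt(key, message)  # this is all a relay server ever holds
# A scanning mandate would force the relay to hold `message` itself,
# which is exactly what end-to-end encryption is designed to prevent.
assert decrypt(key, ciphertext) == message
```

Client-side scanning “solves” this only by inspecting the message before it is encrypted on the sender’s device, which is why EFF calls it just another way to break end-to-end encryption.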

Click here to read the entire article at EFF.org.

EFF: Dangers to Privacy in EARN IT Act

The EARN IT Act introduced by Senator Lindsey Graham purports to be for the prevention of online child exploitation “and other purposes.” It’s those other purposes that we need to watch. The EFF, an organization fighting for your digital civil liberties, expounds upon the many dangers lurking inside this bill in the article Congress Must Stop the Graham-Blumenthal Anti-Security Bill.

There’s a new and serious threat to both free speech and security online. Under a draft bill that Bloomberg recently leaked, the Attorney General could unilaterally dictate how online platforms and services must operate. If those companies don’t follow the Attorney General’s rules, they could be on the hook for millions of dollars in civil damages and even state criminal penalties.

The bill, known as the Eliminating Abusive and Rampant Neglect of Interactive Technologies (EARN IT) Act, grants sweeping powers to the Executive Branch. It opens the door for the government to require new measures to screen users’ speech and even backdoors to read your private communications—a stated goal of one of the bill’s authors.

Senators Lindsey Graham (R-SC) and Richard Blumenthal (D-CT) have been quietly circulating a draft version of EARN IT. Congress must forcefully reject this dangerous bill before it is introduced.

EARN IT Is an Attack on Speech

EARN IT undermines Section 230, the most important law protecting free speech online. Section 230 enforces the common-sense principle that if you say something illegal online, you should be the one held responsible, not the website or platform where you said it (with some important exceptions)…

EARN IT is a direct threat to constitutional protections for free speech and expression. To pass constitutional muster, a law that regulates the content of speech must be as narrowly tailored as possible so as not to chill legitimate, lawful speech. Rather than being narrowly tailored, EARN IT is absurdly broad: under EARN IT, the Commission would effectively have the power to change and broaden the law however it saw fit, as long as it could claim that its recommendations somehow aided in the prevention of child exploitation. Those laws could change and expand unpredictably, especially after changes in the presidential administration…

Throughout his term as Attorney General, William Barr has frequently and vocally demanded “lawful access” to encrypted communications, ignoring the bedrock technical consensus that it is impossible to build a backdoor that is only available to law enforcement. Barr is far from the first administration official to make impossible demands of encryption providers: he joins a long history of government officials from both parties demanding that encryption providers compromise their users’ security.

We know how Barr is going to use his power on the “best practices” panel: to break encryption. He’s said, over and over, that he thinks the “best practice” is to always give law enforcement extraordinary access. So it’s easy to predict that Barr would use EARN IT to demand that providers of end-to-end encrypted communication give law enforcement officers a way to access users’ encrypted messages. This could take the form of straight-up mandated backdoors, or subtler but no less dangerous “solutions” such as client-side scanning. These demands would put encryption providers like WhatsApp and Signal in an awful conundrum: either face the possibility of losing everything in a single lawsuit or knowingly undermine their own users’ security, making all of us more vulnerable to criminals…

Weakening Section 230 makes it much more difficult for a startup to compete with the likes of Facebook or Google. Giving platforms a legal requirement to screen or filter users’ posts makes it extremely difficult for a platform without the resources of the big five tech companies to grow its user base (and of course, if a startup can’t grow its user base, it can’t get the investment necessary to compete)…

Click here to read the entire article at EFF

 

EFF Assists in Right to Repair Law

Cory Doctorow of the Electronic Frontier Foundation, a non-profit group which works to protect civil liberties in the digital world, has written about how the EFF is supporting legislation in Massachusetts to protect vehicle owners’ right to repair their vehicles themselves or at independent service providers. Farmers in our own area are well acquainted with the efforts of tractor manufacturers to limit their right to repair. Back in 2012, Massachusetts became the first state to pass right-to-repair legislation, which ended up improving access to repair information for most of the country. Manufacturers have since redesigned their products to try to avoid those protections.

Bay Staters Continue to Lead in Right to Repair, and EFF Is There to Help

…EFF was pleased to submit comments to the Massachusetts Legislature’s Joint Committee on Consumer Protection and Professional Licensure for a hearing on January 13 in support of HB4122.

In those comments, sent to each member of the Committee, EFF Special Consultant Cory Doctorow wrote:

Auto manufacturers have argued that independent service endangers drivers’ cybersecurity. In reality, the opposite is true: security is weakened by secrecy and strengthened by independent testing and scrutiny. It is an iron law of information security that “there is no security in obscurity”—that is, security cannot depend on keeping defects a secret in the hopes that “bad guys” won’t discover and exploit those defects. And since anyone can design a security system that they themselves can’t imagine any way of breaking, allowing manufacturers to shroud their security measures in secrecy doesn’t mean that their cars can’t be hacked—in fact, history has shown that vehicle computers depending on secrecy for security are, in fact, frequently vulnerable to hacking.

In 2018 and 2019, cities, hospitals, and other large institutions had their informatics systems seized by petty criminals using off-the-shelf ransomware that had combined with a defect in Windows that the NSA had discovered and kept secret—until an NSA leaker released it to the world. As these cities discovered, the NSA’s decision to keep these defects secret did not put them out of reach of bad guys—it just meant that institutional Microsoft customers were put at grave risk, and that Microsoft itself did not know about the devastating bugs in its own products and so could not fix them.

Information security is absolutely reliant upon independent security researchers probing systems and disclosing what they discover. Allowing car manufacturers to monopolize service—and thus scrutiny—over their products ensures that the defects in these fast-moving, heavy machines will primarily become generally known after they are exploited to the potentially lethal detriment of drivers and the pedestrians around them.

The manufacturers’ desire to monopolize bad news about design defects in their own products is especially dire because it rides on the tails of a strategy of monopolizing service and parts for those products. The uncompetitive, concentrated automotive sector has already brought itself to the brink of ruin—averted only by the infusion of $80.7B in tax-funded bailouts. More than a decade later, it remains in dire need of competitive discipline, as is evidenced by a commercial strategy dominated by reducing public choice, surveilling their own customers and selling their data, and extracting monopoly rents from luckless drivers who are locked into their proprietary ecosystems.

EFF: Ending Government Use of Face Surveillance

The Electronic Frontier Foundation (EFF) has launched a new campaign called About Face to help communities call for an end to government use of face surveillance. With the recent announcement that facial recognition is coming to Sea-Tac airport, it is clear that face surveillance is becoming increasingly prevalent in America.

…Many forms of biometric data collection raise a wealth of privacy, security, and ethical concerns. Face surveillance ups the ante. We expose our faces to public view every time we go outside. Paired with the growing ubiquity of surveillance cameras in our public places, face surveillance technology allows for the covert and automated collection of information related to when and where we worship or receive medical care, and who we associate with professionally or socially.

Many proponents of the technology argue that there is no reasonable expectation of privacy when we spend time in public, and that if we have nothing to hide, we have nothing to fear. EFF is not alone in finding this argument meritless. In his recent majority opinion in the watershed Carpenter v. United States (2018), Supreme Court Chief Justice John Roberts wrote: “A person does not surrender all Fourth Amendment protection by venturing in the public sphere.” In a recent Wired interview, Attorney Gretchen Greene explains: “Even if I trust the government, I do care. I would rather live in a world where I feel like I have some privacy, even in public spaces.” Greene goes on to identify the inherent First-Amendment concerns implicated by government use of face surveillance: “If people know where you are, you might not go there. You might not do those things.”

Like many of us, Greene is particularly concerned about how the technology will impact members of already marginalized communities. “Coming out as gay is less problematic professionally than it was, in the US, but still potentially problematic. So, if an individual wants to make the choice [of] when to publicly disclose that, then they don’t want facial recognition technology identifying that they are walking down the street to the LGBTQ center.” These concerns are not limited to any one community, and the impacts will be felt regardless of intent. “We’re not trying to stop people from going to church, we’re not trying to stop them from going to community centers, but we will if they are afraid of [the consequence] in an environment that is hostile to, for instance, a certain ethnicity or a certain religion…”

Click here to read the entire article at EFF.org.

EFF: US-UK Agreement to Allow Warrantless Access to US Internet Servers

This article from the Electronic Frontier Foundation, which fights for your digital freedoms, concerns an agreement between the US and the UK that would allow UK police access to data held by American companies without following US privacy laws or the 4th Amendment.

Congress, Remember the 4th Amendment? It’s Time to Stop the U.S.-UK Agreement.

Unless Congress stops it, foreign police will soon be able to collect and search data on the servers of U.S. Internet companies. They’ll be able to do it without a probable cause warrant, or any oversight from a U.S. judge. This is all happening because of a new law enforcement deal between the U.S. and the United Kingdom. And while it seeks to exclude purely domestic correspondence between U.S. citizens and residents, plenty of Americans’ data will get swept up when they communicate with targeted individuals located abroad.

This is all happening because, for the first time, the U.S. executive branch is flexing its power to enter into law enforcement agreements under the CLOUD Act. We’ve been strongly opposed to this law since it was introduced last year. The recently signed deal between the U.S. Department of Justice and the U.K. Home Office will allow U.K. police easy access to data held by American companies, regardless of where the data is stored. These U.K. data requests, including demands to collect real-time communications, do not need to meet the standards set by U.S. privacy laws or the 4th Amendment. Similarly, the deal will allow U.S. police to grab information held by British companies without following U.K. privacy laws.

This deal, negotiated by American and British law enforcement behind closed doors and without public input, will deal a hammer blow to the legal rights of citizens and residents of both countries. And the damage won’t stop there. The U.S.-U.K. Cloud Act Agreement may well become a model for further bilateral deals with other foreign governments and the United States. Earlier this month, Australian law enforcement agencies began negotiating their own deal to directly access private information held by U.S. Internet companies.

There’s still one possible path to put the brakes on this disastrous U.S.-UK deal: Congress can introduce a joint resolution of disapproval of the agreement within 180 days. This week, EFF has joined 19 other privacy, civil liberties, and human rights organizations to publish a joint letter explaining why Congress must take action to resist this deal.

No Prior Judicial Authorization

In the U.S., the standard for when law enforcement can collect stored communications content is clear: police need to get a warrant, based on probable cause. If police want to wiretap an active conversation, they have to satisfy an even higher standard, sometimes called a “super warrant,” that limits both the timing and use of a wiretap. Perhaps most importantly, stored communications warrants and wiretap warrants have to be signed by a U.S. judge, which adds an extra layer of review to whether privacy standards are met. At EFF, a core part of our work is insisting on the importance of a warrant in many different scenarios.

Judicial authorization is a critical step in the U.S. warrant process. When police search people’s private homes, offices, or devices, they must justify why the search for specific evidence outweighs the presumption that individuals remain free from government intrusion. Judicial authorization acts as a safeguard between citizens and law enforcement. Further, history has shown that police can and will abuse their powers for intimidation, or even personal gain. In colonial times, the British military used general warrants to search through colonists’ houses and seize property—actions that helped fuel a revolution, and formed the basis for the 4th Amendment to the U.S. Constitution.

Incredibly, the DOJ has just thrown those rights away. Instead of relying on probable cause, the new agreement uses an untested privacy standard that says that orders must be based on a “reasonable justification based on articulable and credible facts, particularity, legality, and severity.” No judge in any country has decided what this means.

Click here to read the entire article at EFF.

EFF: Big Tech’s Disingenuous Push for a Federal Privacy Law

Following the theme of the earlier article on The Meat Packing Myth, this article from the Electronic Frontier Foundation, an organization leading the fight for digital privacy and free speech, examines a push by big tech companies for federal regulation of digital privacy and explains why this push serves the corporations’ self-interest rather than your actual privacy.

Big Tech’s Disingenuous Push for a Federal Privacy Law

This week, the Internet Association launched a campaign asking the federal government to pass a new privacy law.

The Internet Association (IA) is a trade group funded by some of the largest tech companies in the world, including Google, Microsoft, Facebook, Amazon, and Uber. Many of its members keep their lights on by tracking users and monetizing their personal data. So why do they want a federal consumer privacy law?

Surprise! It’s not to protect your privacy. Rather, this campaign is a disingenuous ploy to undermine real progress on privacy being made around the country at the state level. IA member companies want to establish a national “privacy law” that undoes stronger state laws and lets them continue business as usual. Lawyers call this “preemption.” IA calls this “a unified, national standard” to avoid “a patchwork of state laws.” We call this a big step backwards for all of our privacy.

The question we should be asking is, “What are they afraid of?”

Stronger state laws

After years of privacy scandals, Americans across the political spectrum want better consumer privacy protections. So far, Congress has failed to act, but states have taken matters into their own hands. The Illinois Biometric Information Privacy Act (BIPA), passed in 2008, makes it illegal to collect biometric data from Illinois citizens without their express, informed, opt-in consent. Vermont requires data brokers to register with the state and report on their activities. And the California Consumer Privacy Act (CCPA), passed in 2018, gives users the right to access their personal data and opt out of its sale. In state legislatures across the country, consumer privacy bills are gaining momentum.

This terrifies big tech companies. Last quarter alone, the IA spent nearly $176,000 lobbying the California legislature, largely to weaken CCPA before it takes effect in January 2020. Thanks to the efforts of a coalition of privacy advocates, including EFF, it failed. The IA and its allies are losing the fight against state privacy laws. So, after years of fighting any kind of privacy legislation, they’re now looking to the federal government to save them from the states. The IA has joined Technet, a group of tech CEOs, and Business Roundtable, another industry lobbying organization, in calls for a weak national “privacy” law that will preempt stronger state laws. In other words, they want to roll back all the progress states like California have made, and prevent other states from protecting consumers in the future. We must not allow them to succeed.

A private right of action

Laws with a private right of action allow ordinary people to sue companies when they break the law. This is essential to make sure the law is properly enforced. Without a private right of action, it’s up to regulators like the Federal Trade Commission or the U.S. Department of Justice to go after misbehaving companies. Even in the best of times, regulatory bodies often don’t have the resources needed to police a multi-trillion dollar industry. And regulators can fall prey to regulatory capture. If all the power of enforcement is left in the hands of a single group, an industry can lobby the government to fill that group with its own people. Federal Communications Commission chair Ajit Pai is a former Verizon lawyer, and he’s overseen massive deregulation of the telecom industry his office is supposed to keep in check.

The strongest state privacy laws include private rights of action. Illinois BIPA allows users whose biometric data is illegally collected or handled to sue the companies responsible. And CCPA lets users sue when a company’s negligence results in a breach of personal information. The IA wants to erase these laws and reduce the penalties its member companies can face for their misconduct in legal proceedings brought by ordinary consumers…