
Where Does eDiscovery Fit in the Facial Recognition Conversation?


For most of us, the concept of facial recognition – like so much technology of the last decade – began as a sci-fi detail we accepted on the big screen but didn’t give much thought to in our day-to-day lives. Then one day, our phones started tagging photos automatically, asking, almost sheepishly, “Is this you?” And just like that, the idea that an algorithm could learn to recognize our faces was real. But we’ve moved on from that innocuous beginning and are now treading further into the realm of another type of film (I’m thinking here of Brazil or Minority Report), where technology aids law enforcement in making our world a safer place but also introduces new privacy and ethics conundrums when that technology fails or is corrupted. And, as always, eDiscovery has a place where legal and tech (in this instance, law enforcement and facial recognition) collide.

Law Enforcement, Facial Recognition, and Privacy Concerns

In a Washington Post article in July 2019, it was revealed that agents with the Federal Bureau of Investigation and Immigration and Customs Enforcement were using facial recognition software to scan state driver’s license databases, analyzing millions of Americans’ photos without their knowledge or consent.

In the past, police have used fingerprints, DNA and other “biometric data” collected from criminal suspects (think of those large binders of mugshots you always see victims flipping through in cop shows), but the photos in DMV records are of a state’s residents, most of whom have never been charged with a crime.

According to the Government Accountability Office, the FBI has logged more than 390,000 facial-recognition searches of federal and local databases, including state DMV databases, since 2011. Neither Congress nor state legislatures have authorized the development of such a system, and lawmakers on both sides of the political spectrum are now concerned.

“Law enforcement’s access of state databases,” House Oversight Committee Chairman Elijah E. Cummings (D-Md.) said, is “often done in the shadows with no consent.” And Rep. Jim Jordan (Ohio), the House Oversight Committee’s ranking Republican, seemed particularly incensed during a hearing on the technology earlier this year.

“They’ve just given access to that to the FBI,” he said. “No individual signed off on that when they renewed their driver’s license, got their driver’s licenses. They didn’t sign any waiver saying, ‘Oh, it’s okay to turn my information, my photo, over to the FBI.’ No elected officials voted for that to happen.”

Off-The-Shelf Facial Recognition Tools for Law Enforcement

In Oregon, back in 2017, the Washington County Sheriff’s Office became the first law enforcement agency in the country known to use Amazon’s artificial-intelligence tool Rekognition, and almost overnight, the deputies of this small county in the suburbs outside of Portland had ramped up their investigative ability. With this off-the-shelf technology, they can take a picture – perhaps captured by a security camera, social-media account, or cellphone – and scan it for matches against more than 300,000 mug shots taken at the county jail since 2001, linking a face to an identity.
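To make that workflow a bit more concrete, here is a minimal, hypothetical sketch of the kind of face search such a system performs, using Amazon Rekognition through the AWS SDK for Python (boto3). The collection name, probe image, and match threshold are illustrative assumptions, not details of the county’s actual setup.

```python
# Hypothetical sketch of a mug-shot search with Amazon Rekognition (boto3).
# The collection name, probe image, and threshold are illustrative assumptions,
# not details of any agency's real configuration.
import boto3

rekognition = boto3.client("rekognition", region_name="us-west-2")

# Load a probe image, e.g. a frame from a security camera or a cellphone photo.
with open("suspect_frame.jpg", "rb") as f:
    probe_bytes = f.read()

# Search an existing face collection (previously populated with booking photos)
# for the closest matches to the largest face found in the probe image.
response = rekognition.search_faces_by_image(
    CollectionId="county-booking-photos",  # hypothetical collection name
    Image={"Bytes": probe_bytes},
    FaceMatchThreshold=90,                 # only return fairly confident matches
    MaxFaces=5,
)

for match in response["FaceMatches"]:
    face = match["Face"]
    print(f"{face.get('ExternalImageId', 'unknown')}: "
          f"similarity {match['Similarity']:.1f}%")
```

Even in a sketch this small, the central tradeoff is visible: lower the match threshold and you surface more candidates, but also more of the false matches that drive the accuracy and equity concerns discussed below.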

But linking a photo to previous mug shots is essentially the same work detectives used to do manually; now AI assists. Still, there are significant problems with the technology itself.

Facial Recognition Bans Due to Concerns Ranging from Privacy to Racial Equity

Some places have already taken measures to ban face recognition software. In 2019, San Francisco became the first city to ban the technology for law enforcement and government agencies. Similar measures are under consideration in Oakland and Massachusetts. Lawmakers in California are also considering a statewide ban on facial recognition programs.

To add to that list, Axon, the largest manufacturer of police body cameras, is rejecting the possibility of selling facial recognition technology on the recommendation of an independent ethics board it created last year after acquiring two artificial intelligence companies.

In a 42-page report, the ethics panel found that facial recognition technology is not advanced enough for law enforcement to depend on, with concerns ranging from “privacy costs to racial equity.” In fact, the technology was found to be less accurate in identifying the faces of women than men, and of younger people compared to older ones. The same was true for people of color, who were harder to correctly identify than white people.

But the lack of regulation or precedent means that, while the technology is being banned in some places, it’s being pursued in others. For instance, Detroit reportedly signed a $1 million deal for software that let it continuously monitor “hundreds of private and public cameras set up around the city,” including gas stations, restaurants, churches and schools, according to the New York Times.

Facial Recognition and eDiscovery

So where does facial recognition fit in eDiscovery? As with any emerging technology, it’s worthwhile for those of us in legaltech to stay abreast of these changes. For one, any new technology generates potential ESI that must be discoverable should a criminal or civil case arise in which that electronic data is evidence. Often, people in legal circles might argue, “That hasn’t happened yet, so we’ll worry about it later.” Then again, people a few decades ago might have found it unbelievable that data from a phone app (“phone app?” they might add with a puzzled look) meant for requesting rides from a stranger would be used in a nationally covered murder case.

But there is also the notion that facial recognition tools might be used to help attorneys and litigation support specialists do their work. In fact, the company Veritone recently announced an AI eDiscovery tool that makes unstructured data searchable by keywords, faces, and objects.

As technology continues to forge ahead, it’s important that the legal world engages in the conversations surrounding these technologies before it falls even further behind in the game of catch-up many say it’s already playing.

 

Written by Jim Gill
Content Writer, Ipro


You Have a Choice When it Comes to Your Data, So Choose to Give it To Us: The Ongoing Dance Between Data Privacy and eDiscovery

We all love the idea of having control. Especially when it comes to the things that matter to us most. Things like privacy. And it’s that notion of control that tech giants like Google and Amazon are using in their latest announcements regarding user data. 

In a New York Times op-ed published in May, Google’s Sundar Pichai said, “Privacy is personal, which makes it even more vital for companies to give people clear, individual choices around how their data is used.” This, along with a statement that Google believes the United States would benefit from GDPR-like legislation, is no doubt an attempt to reassure users.

Amazon’s new Echo Show gives users the voice command “Alexa, delete everything I said today,” which deletes all voice commands from Amazon’s servers after midnight of that calendar day. In a few weeks, you’ll be able to use the command, “Alexa, forget what I just said,” to delete an individual command. But Amazon hasn’t said if these commands will delete metadata, and they won’t delete data shared in a transaction, like calling for a rideshare, ordering dinner, or purchasing something online. (In some ways, it begins to seem data is like energy in the First Law of Thermodynamics: it isn’t really destroyed, it only changes form).  

More than this, though, is the idea that putting privacy in the hands of the user creates a false dichotomy: you have control over how tech companies use your data, but letting them use your data makes their products so much more functional and convenient. And as Lauren Goode from Wired said, when “tech companies have made it a choice between convenience and privacy, convenience will always win.”

So, what does this mean for the LegalTech world? Any time data is involved, everything is at stake. Data is evidence, and the amount of electronically stored information (ESI) continues to grow at breakneck speed. But the same challenges apply regardless of how that ESI is created: where is it located, and how can it be preserved, collected, reviewed, and produced for the courts in a timely, defensible, and cost-effective manner?

When you throw privacy into the mix, it adds another layer. Who owns the data? Is it protected and under what guidelines and jurisdictions? 

Law tends to be a stolid, steadfast, and, let’s face it, slow-moving entity; technology is always chasing what lies beyond the horizon. For investigators, attorneys, and other players in the eDiscovery world (lit-support, IT, paralegals, etc.), understanding the data landscape belonging to specific custodians involved in a case can be complex on its own. Understanding the larger digital world, and how changes in both technology and policy shape the creation, ownership, and extraction of data, becomes increasingly important in creating effective strategies for the courtroom.

So the question becomes: who uses Alexa, who uses Google, and who decides to pass on both?

Convenient or Creepy: The Smart Speakers are Listening

In a previous blog post, we talked about privacy as it relates to fitness wearables. To continue in that vein, let’s examine smart speakers like Google Home, Alexa, and Siri. If you use any of these devices, you know the convenience associated with voice commands. Need the time? Alexa has it. Want to know some obscure movie fact? Ask Siri. Need to buy paper towels? Tell Google Home. Easy peasy, right? In today’s busy world of work, events, and to-do lists, these devices lend a helpful hand, but at the price of consumer privacy.

Recently, both Google and Amazon filed patents that would allow even broader abilities to listen in on consumers’ lives. Rather than waiting for a wake command as the speakers do now, the device would always be on and listening for keywords. But what is it doing with that information? According to both companies, the information would be used to target advertising based on the collected data. Say you’re sitting around the dinner table discussing your next family vacation, and then Alexa starts reciting travel deals for your target destination. How about Google Home listening to you discuss a medical issue with a friend and then offering related prescription ads? What about recognizing your mood, or an oncoming cold based on a sneeze, or an argument with your spouse? Google Home could even suggest parenting tactics based on its monitoring of your family interactions. You should spend more time with Susie, Google says. Do you want all of that in the cloud for advertisers to mill through? Is this level of technology cool or creepy?

While these assistants can be a source of convenience, we’d be remiss not to consider the potential consequences. Could information collected from smart speakers be used in a court of law? Is it discoverable? Could anything obtained be used for a conviction? Technology often moves faster than the law, but the law is catching up. Consider the Arkansas murder case in which investigators wanted to use information gathered by the suspect’s Amazon Echo smart speaker. Amazon jumped in, citing First Amendment protections, an important step in setting a precedent for how this technology could be handled in the future. The suspect ended up granting permission for the data to be collected, but in future cases, consent and rights will need heavy consideration. For an innocent person accused of a crime, or for an unfortunate victim, the smoking gun could be a home assistant device. But take a moment to consider the flip side. Could a conversation taken out of context lead to suspicion when something goes awry, making it difficult to defend oneself? Do you really want your casual conversations used against you in a court of law?

As with much of current technology, consumers need to weigh the pros and cons of adopting smart devices. You must ask yourself: is giving the device full access to your life, habits, interests, and vices worth the convenience? Where is the line, and once we cross it, can we ever go back?

I always feel like somebody’s watching me. Hint: it’s your fitness tracker.

Cheesy ’80s song reference aside, privacy in the age of smart EVERYTHING is a serious concern. While it’s convenient, helpful, and even seductive to have all this technology at our fingertips, or wrists as it were, we would be remiss not to consider what we’re giving up in exchange.

Consider the recent situation in which a user discovered her entire jogging route had been made public for strangers to view at their leisure. For a single woman who often ran in the early morning or evening hours in an urban setting, this is beyond alarming. Or consider the unintentional national security risk unleashed when Strava published a global heat map that identified secret military outposts. Not good. Many fitness trackers have a social aspect that encourages users to interact with others, great for motivation and camaraderie. The tradeoff? You’re giving up your location information, which can also expose your full name and picture. It wouldn’t be a tough leap from there to figure out another user’s patterns, routes, workplace, and residence. That’s pretty high on the creepy scale.

Another common trend is insurance and wellness programs encouraging the use of fitness wearables, often offering discounts or points for achieving specific activity goals. Many people are excited to sign up for these programs as a way to motivate themselves to live healthier lives. But what if the insurance company uses that data at a later date to determine you are not as healthy as it would like and raises your premiums? Or what if it offers seemingly helpful suggestions of programs and tips to help you meet your goals, but penalizes you if those goals are not met? What if you find your car insurance rates going up based on driving habits derived from a fitness wearable? Is this still a good tradeoff for clocking your daily steps?

The law is paying attention too. Fitness trackers are being used to prove injury after an accident in the form of reduced activity, or in some cases, to prove insurance fraud when the tracker shows activity that doesn’t match the person’s injury claims. They can also be used to identify your whereabouts when a crime was committed: a potential alibi or smoking gun, as the case may be. In one situation, a woman was charged with making a false crime report after her Fitbit contradicted her timeline. With unclear guidelines on what is and isn’t protected, it seems the data from your wearable can and will be used against you in a court of law.

In response to these privacy concerns, some companies have made improvements. Fitbit, for example, voluntarily complied with HIPAA in order to partner with corporate wellness programs. Also, many companies have changed their default settings to opt-in rather than opt-out as was the case before. These are positive changes in a world where personal privacy is at a premium.

It’s clear wearables and other technologies such as Amazon Echo and Google Home are only increasing in popularity, so what can you do to protect yourself? You can start with one of the most obvious but often avoided tasks: read the fine print. Understand the privacy policy of the tech you’ve adopted and what the company can and cannot do with your data. Participating in a workplace wellness program is great for culture, health benefits, and friendly competition, but before handing over your wearable data, learn what the policy is and what it means to you. Understand where your information goes and what you can do if there is an adverse impact. Check your privacy settings on wearables with social components to ensure you’re only sharing information you’re comfortable with, and do your research before buying to understand what you’re getting.

Once you’ve done all that, there’s only one thing left to do. Get up and kill that daily step goal.