Tag Archives: privacy

Fitbit Data Provides Clues in Murder Case: eDiscovery & Criminal Investigation


Once again, eDiscovery and emerging data sources are at the center of a murder investigation. A recent article in Wired highlights how investigators used data from the victim’s Fitbit and a neighbor’s Ring doorbell camera to establish a timeline, identify a suspect (the victim’s 92-year-old stepfather), and obtain a warrant to search the suspect’s home, all of which led to his arrest.

As in other cases involving data from mobile phones, social media, and the Internet of Things, it wasn’t the electronically stored information (ESI) alone that led to the arrest; rather, the ESI gave investigators leads that wouldn’t have existed in a pre-digital world. Those leads produced enough evidence to request warrants for further searches and to investigate specific suspects, which in turn brought about arrests. In other words, solving crimes still boils down to good old-fashioned detective work; only now, investigators have new ways to uncover what might have happened and who may have been involved.

Fitbit introduced its first personal fitness tracker ten years ago, and today around 27 million people use its devices. Add to that competitors’ devices (such as the Apple Watch), and it’s not hard to imagine how investigators often have ready access to biometric data from suspects and/or victims. Last year alone, 170 million wearables were shipped worldwide.

But while much electronic data is self-authenticating under Federal Rule of Evidence 902(14), biometric data gathered from wearable devices is much harder to authenticate as accurate. An analysis of 67 studies on Fitbit’s movement tracking concluded that “the device worked best on able-bodied adults walking at typical speeds. Even then, the devices weren’t perfect—they got within 10 percent of the actual number of steps a person took half of the time—and became even less accurate in counting steps when someone was resting their wrist on a walker or stroller, for example. ‘It’s not measuring actual behavior,’ says Lynne Feehan, a clinical associate professor at the University of British Columbia and the lead researcher on the paper. ‘It’s interpreting motion.’”

Evidence from fitness trackers has been admitted in homicide cases in Europe and the US, but expert witnesses and analysts are often used in conjunction with the data to authenticate it. Only a few judges have ruled on how to handle evidence from fitness trackers. For example, “In a 2016 Wisconsin case, Fitbit data was used to eliminate the possibility that a woman was murdered by her live-in boyfriend. The judge ruled that an affidavit from Fitbit established the device’s authenticity and allowed lawyers to introduce its step-counting data; at trial, a sheriff’s department analyst vouched for the reliability of the man’s particular device. However, the judge barred the Fitbit’s sleep data, citing a class-action suit that claims the sleep tracking could be off by as much as 45 minutes.”

Similar to older technologies (such as the polygraph), electronically created data isn’t necessarily irrefutable. Antigone Peyton, an intellectual property and technology law attorney who has used data from wearables in civil cases, says people tend to assume that “data is equivalent to truth,” when in fact there are “many ways the information on these devices can be interpreted.”

Another aspect investigators have to consider with this type of data is users’ privacy. Last year, the Supreme Court ruled that, under the Fourth Amendment, police must have a warrant to search phone location data. At times, the companies that hold the data refuse to hand it over to law enforcement if they feel privacy is being infringed upon, but by and large the tech industry seems to cooperate, especially as procedures for handling this type of data become more defined.

What is important for the legal industry to consider is how different types of data and data sources work, and what that means when it comes to authenticating them. It’s similar to when wiretaps, phone records, and other electronic surveillance were introduced into the detective’s toolkit in the 20th century. The difference is that the amount of information created today is much greater, it’s created by every person on a near-continuous basis from a large number of sources, and not all of it can be treated the same way.

But what this new data does provide is more ways to reconstruct scenes, rule out potential suspects, corroborate or contradict testimony, and gather information that allows investigators to pursue new tactics and lines of questioning leading to further warrants and arrests. For attorneys, it’s important to stay current on the most recent investigative uses of, and court rulings on, these data sources, which will continue to define and clarify how ESI is used in both criminal and civil cases.


Written by Jim Gill
Content Writer, Ipro


Need to brush up on your Federal Rules of Civil Procedure as they apply to eDiscovery?
Download Ipro’s FRCP Cheat Sheet!

It’s Not Just About the Money (or Privacy): The Role of Specificity, Technology, and FRCP Rule 26


What Does FRCP Rule 26 Say about Scope and Proportionality?

In 2015, when the Federal Rules of Civil Procedure were amended, scope under Rule 26 was a hot topic of discussion, mainly around costs. But proportionality doesn’t just apply to the cost of discovery. With concerns around privacy becoming a daily headline due to data breaches, privacy laws, and the use of personal data by large corporations and governments, will the cry of “privacy” take the place of “burdensome costs” in proportionality rulings?

Before the 2015 amendments, it was common for broad discovery requests to be submitted that, if carried out, would end up costing far more than the lawsuit was worth in the first place. For organizations with deep pockets, large discovery requests became a tactic similar to continuing to raise the bet in a poker hand until your opponent had no choice but to fold.

But since 2015, broad discovery requests, or “fishing expeditions,” have been essentially banned. Under the amended rule, the burden to demonstrate proportionality lies with both the requesting and responding parties. And the first two questions that should be answered are:

  • Is the information requested relevant to the outcome of the case?
  • Is the information privileged?

If the data in question passes these two tests (yes, it’s relevant to the case, and no, it’s not privileged information), then the courts look at the following six factors laid out in FRCP Rule 26(b)(1) to help determine rulings on proportionality:

  • The importance of the issues at stake in the action
  • The amount in controversy
  • The parties’ relative access to relevant information
  • The parties’ resources
  • The importance of the discovery in resolving the issues
  • Whether the burden or expense of the proposed discovery outweighs its likely benefit


How Does Privacy Fit into the Discussion of Scope and Proportionality?

Henson v. Turn, Inc. (N.D. Cal. Oct. 22, 2018) is a fairly recent case that deals specifically with proportionality and privacy. In it, the plaintiffs brought a class action against the defendant, claiming that the defendant engaged in the practice of using “zombie cookies,” which users cannot delete, block, or opt out of.

In response, the defendant requested that the plaintiffs:

  • Produce their mobile devices for inspection or produce complete forensic images of their devices
  • Produce their full web browsing histories from their devices
  • Produce all cookies stored on or deleted from their devices

The court ruled that the defendant’s request to directly inspect the plaintiffs’ mobile devices or for complete forensic images of the devices “threatens to sweep in documents and information that are not relevant to the issues in this case, such as the plaintiffs’ private text messages, emails, contact lists, and photographs.”

And because the parties already had protocols in place for producing information from the plaintiffs’ devices or forensic images, the defendant instead issued nine requests for specific information from the plaintiffs’ devices, which the plaintiffs complied with.

The same happened with the request for the browsing histories and cookies. The plaintiffs produced or offered to produce their web browsing history and cookies associated with the defendant’s partner websites and the date fields of all other cookies on their mobile devices. The plaintiffs also offered to meet and confer with the defendant to consider requests for specific cookies.

And the court ruled in the plaintiffs’ favor.


What is the Role of Technology in Scope and Proportionality?

So why does this matter? The key here is not scope and proportionality, or even privacy. Yes, that’s the topic of the case. But the bigger issue at stake is how the creators of legal technology will respond. With Rule 26, it’s all about specificity: I want this specific data, from that specific custodian, from these specific date ranges, because it affects the case in this way. After that, it’s just a matter of having the tools to get those specific items easily and cost-effectively.

In Henson v. Turn, the judge cited a 2006 case (Sony BMG Music v. Arellanes) in which a request was made to image an entire hard drive, and it was determined that the production would reveal irrelevant data when all that was needed were specific emails. Now we have technology that allows us to target specific emails and other data on a computer. We can deNIST and dedupe, we can redact, we can do all kinds of things within our eDiscovery tools that keep data within the scope and proportionality of a request. It wasn’t always so. It took the creators and innovators of technology to make it a relatively easy and standardized process.
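To make the deNIST and dedupe step concrete, here is a minimal Python sketch of how a collection tool might filter files by hash: first dropping files whose hashes appear in a known-file reference list (the idea behind deNISTing against the NIST NSRL hash set), then removing exact duplicates. The file paths and the known-hash set are illustrative assumptions, not a description of any particular product’s implementation.

```python
import hashlib
from pathlib import Path

def sha1_of(path: Path) -> str:
    """Hash a file in chunks so large files don't exhaust memory."""
    h = hashlib.sha1()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def denist_and_dedupe(files, known_system_hashes):
    """Return files that are neither known system files nor exact duplicates.

    known_system_hashes: a set of hashes for standard OS/application files
    (conceptually, the NIST NSRL reference set used for deNISTing).
    """
    seen = set()
    kept = []
    for path in files:
        digest = sha1_of(path)
        if digest in known_system_hashes:  # deNIST: drop known system files
            continue
        if digest in seen:                 # dedupe: drop exact duplicates
            continue
        seen.add(digest)
        kept.append(path)
    return kept

# Illustrative usage with assumed paths and an assumed nsrl_hashes set:
# collection = [p for p in Path("/evidence/custodian_01").rglob("*") if p.is_file()]
# documents = denist_and_dedupe(collection, nsrl_hashes)
```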

This technology made the cries of “overburdensome discovery” seem moot. No discovery is overburdensome these days when you can pinpoint the exact data that is relevant to the case. You just have to ask for it. With the onus on the requester to follow the guidelines of FRCP Rule 26, if you make a request that’s overburdensome, you’re just being lazy. And judges aren’t having it.

With this case highlighting the role of privacy in Rule 26, I think leaders in the eDiscovery industry should be looking ahead the same way at least some of them were in 2006. How can we create tools that allow the handling of electronic data through the entire litigation process? Only now, instead of hard drives full of emails and Word documents, it’s data from a number of unique sources living across platforms, mobile devices, and the Internet of Things.

The guidelines for proportionality and scope are laid out very clearly in the FRCP. The only difference is the need for tools that make the process easier given the digital landscape of 2019, not just the one that existed in 2006.


Written by Jim Gill
Content Writer, Ipro


Where Does eDiscovery Fit in the Facial Recognition Conversation?



For most of us, the concept of facial recognition – like so much technology of the last decade – began as a sci-fi detail we accepted on the big screen but didn’t give much thought to in our day-to-day lives. Then one day, our phones started tagging photos automatically, asking, almost sheepishly, “Is this you?” And just like that, the idea that an algorithm could learn to recognize our faces was real. But we’ve moved on from that innocuous beginning and are now treading more and more into the realm of another type of film (I’m thinking here of Brazil or Minority Report), where technology aids law enforcement in making our world a safer place, but also introduces new privacy and ethics conundrums when that technology fails or is corrupted. And, as always, eDiscovery has a place where legal and tech (in this instance, law enforcement and facial recognition) collide.

Law Enforcement, Facial Recognition, and Privacy Concerns

A Washington Post article in July 2019 revealed that agents with the Federal Bureau of Investigation and Immigration and Customs Enforcement were using facial recognition software to scan state driver’s license databases, analyzing millions of Americans’ photos without their knowledge or consent.

In the past, police have used fingerprints, DNA and other “biometric data” collected from criminal suspects (think of those large binders of mugshots you always see victims flipping through in cop shows), but the photos in DMV records are of a state’s residents, most of whom have never been charged with a crime.

According to the Government Accountability Office, the FBI has logged more than 390,000 facial recognition searches of federal and local databases, including state DMV databases, since 2011. Yet neither Congress nor state legislatures have authorized the development of such a system, and lawmakers on both sides of the political spectrum are now concerned.

“Law enforcement’s access of state databases,” House Oversight Committee Chairman Elijah E. Cummings (D-Md.) said, is “often done in the shadows with no consent.” And Rep. Jim Jordan (Ohio), the House Oversight Committee’s ranking Republican, seemed particularly incensed during a hearing on the technology earlier this year.

“They’ve just given access to that to the FBI,” he said. “No individual signed off on that when they renewed their driver’s license, got their driver’s licenses. They didn’t sign any waiver saying, ‘Oh, it’s okay to turn my information, my photo, over to the FBI.’ No elected officials voted for that to happen.”

Off-The-Shelf Facial Recognition Tools for Law Enforcement

In Oregon, back in 2017, the Washington County Sheriff’s Office became the first law enforcement agency in the country known to use Amazon’s artificial-intelligence tool Rekognition, and almost overnight, the deputies of this small county in the suburbs outside Portland had ramped up their investigative ability. With this off-the-shelf technology, they could scan for matches of a suspect’s face across more than 300,000 mug shots taken at the county jail since 2001. They could then take a picture – perhaps captured by a security camera, social-media account, or cellphone – and link it to an identity.

But linking a photo to previous mug shots is analogous to what detectives used to do manually before the technology existed; now they are assisted by AI. Still, there are significant problems with the technology itself.

Facial Recognition Bans Due to Concerns Ranging from Privacy to Racial Equity

Some places have already taken measures to ban face recognition software. In 2019, San Francisco became the first city to ban the technology for law enforcement and government agencies. Similar measures are under consideration in Oakland and Massachusetts. Lawmakers in California are also considering a statewide ban on facial recognition programs.

To add to that list, Axon, the largest manufacturer of police body cameras, has rejected the possibility of selling facial recognition technology, at the recommendation of an independent ethics board it created last year after acquiring two artificial intelligence companies.

In a 42-page report, the ethics panel found that facial recognition technology is not advanced enough for law enforcement to depend on, with concerns ranging from “privacy costs to racial equity.” In fact, the technology was found to be less accurate at identifying the faces of women than of men, and of younger people compared to older ones. The same was true for people of color, who were harder to correctly identify than white people.

But the lack of regulation or precedent means that, while the technology is being banned in some places, it’s being pursued in others. For instance, Detroit reportedly signed a $1 million deal for software that let it continuously monitor “hundreds of private and public cameras set up around the city,” including gas stations, restaurants, churches and schools, according to the New York Times.

Facial Recognition and eDiscovery

So where does facial recognition fit in eDiscovery? As with any emerging technology, it’s worthwhile for those of us in legaltech to stay abreast of these changes. For one, any new technology is a potential source of ESI that must be discoverable should a criminal or civil case arise in which that electronic data is evidence. Often, people in legal circles argue, “That hasn’t happened yet, so we’ll worry about it later.” Then again, people a few decades ago might have found it unbelievable that data from a phone app (“phone app?” they might add with a puzzled look) meant for requesting rides from strangers would be used in a nationally covered murder case.

But there is also the notion that facial recognition tools might be used to help attorneys and litigation support specialists do their work. In fact, the company Veritone recently announced an AI eDiscovery tool that makes unstructured data searchable by keywords, faces, and objects.

As technology continues to forge ahead, it’s important that the legal world engages in the conversations surrounding these technologies before it falls even further behind in the game of catch-up many say it’s already playing.


Written by Jim Gill
Content Writer, Ipro


You Have a Choice When it Comes to Your Data, So Choose to Give it To Us: The Ongoing Dance Between Data Privacy and eDiscovery

We all love the idea of having control. Especially when it comes to the things that matter to us most. Things like privacy. And it’s that notion of control that tech giants like Google and Amazon are using in their latest announcements regarding user data. 

In a New York Times op-ed published in May, Google’s Sundar Pichai said, “Privacy is personal, which makes it even more vital for companies to give people clear, individual choices around how their data is used.” This, along with a statement that Google believes the United States would benefit from GDPR-like legislation, is no doubt an attempt to reassure users.

Amazon’s new Echo Show gives users the voice command “Alexa, delete everything I said today,” which deletes all voice commands from Amazon’s servers after midnight of that calendar day. In a few weeks, you’ll be able to use the command “Alexa, forget what I just said” to delete an individual command. But Amazon hasn’t said whether these commands will delete metadata, and they won’t delete data shared in a transaction, like calling for a rideshare, ordering dinner, or purchasing something online. (In some ways, data begins to seem like energy in the First Law of Thermodynamics: it isn’t really destroyed, it only changes form.)

More important, though, is the idea that putting privacy in the hands of the user creates a false choice: you have control over how tech companies use your data, but letting them use your data makes their products so much more functional and convenient. And as Lauren Goode from Wired put it, when “tech companies have made it a choice between convenience and privacy, convenience will always win.”

So, what does this mean for the LegalTech world? Any time data is involved, everything is at stake. Data is evidence, and the amount of electronically stored information (ESI) continues to grow at breakneck speed. But the same challenges apply regardless of how that ESI is created: where is it located, and how can it be preserved, collected, reviewed, and produced for the courts in a timely, defensible, and cost-effective manner?

When you throw privacy into the mix, it adds another layer. Who owns the data? Is it protected, and under what guidelines and jurisdictions?

Law tends to be a stolid, steadfast, and, let’s face it, slow-moving entity; technology is always chasing what lies beyond the horizon. For investigators, attorneys, and other players in the eDiscovery world (lit support, IT, paralegals, etc.), understanding the data landscape of the specific custodians involved in a case can be complex on its own. Understanding the larger digital world, and how changes in both technology and policy affect the creation, ownership, and extraction of data, becomes increasingly important in building effective strategies for the courtroom.

So the question becomes: who uses Alexa, who uses Google, and who decides to pass on both?

Convenient or Creepy: The Smart Speakers are Listening

In a previous blog post, we talked about privacy as it relates to wearable fitness devices. To continue in that vein, let’s examine smart speakers and voice assistants like Google Home, Alexa, and Siri. If you use any of these devices, you know the convenience associated with voice commands. Need the time? Alexa has it. Want to know some obscure movie fact? Ask Siri. Need to buy paper towels? Tell Google Home. Easy peasy, right? In today’s busy world of work, events, and to-do lists, these devices lend a helpful hand, but at the price of consumer privacy.

Recently, both Google and Amazon filed patents that would allow even broader abilities to listen in on consumers’ lives. Rather than waiting for a wake command as the speakers do now, the device would always be on and listening for keywords. But what would it do with that information? According to both companies, the information would be used to target advertising based on the collected data. Say you’re sitting around the dinner table discussing your next family vacation, and then Alexa starts to recite travel deals for your target destination. How about Google Home listening to you discuss a medical issue with a friend and then offering related prescription ads? What about recognizing your mood, or an oncoming cold based on a sneeze, or an argument with your spouse? Google Home could even suggest parenting tactics based on its monitoring of your family interactions. You should spend more time with Susie, Google says. Do you want all of that in the cloud for advertisers to mill through? Is this level of technology cool or creepy?

While these assistants can be a source of convenience, we’d be remiss not to consider the potential consequences. Could information collected from smart speakers be used in a court of law? Is it discoverable? Could anything obtained be used for a conviction? Technology often moves faster than the law, but the law is catching up. Consider the Arkansas murder case in which investigators wanted to use information gathered by the suspect’s Amazon Echo smart speaker. Amazon jumped in citing First Amendment protections, an important step in setting a precedent for how this technology could be handled in the future. The suspect ended up granting permission for the data to be collected, but in future cases, consent and rights will need heavy consideration. For an innocent person accused of a crime, or for the unfortunate victim, the smoking gun could be a home assistant device. But take a moment to consider the flip side. Could a conversation taken out of context lead to suspicion when something goes awry, making it difficult to defend oneself? Do you really want your casual conversations used against you in a court of law?

As with much of current technology, consumers need to weigh the pros and cons of adopting smart devices. You must ask yourself: is giving the device full access to your life, habits, interests, and vices worth the convenience? Where is the line, and once we cross it, can we ever go back?

How Secure is your (Technology) World?

We’ve all seen the headlines of breach after breach at big corporations such as Target and Yahoo. Perhaps the most alarming was the Equifax breach in late 2017, which affected some 145 million consumers and whose full impact is still to be determined. Understandably, cyber attacks like these keep our personal and professional security top of mind.

For legal industry professionals, the security of sensitive data is critical. We all know how confidential document review for eDiscovery is. A breach of information could potentially cause irreparable damage. Imagine hundreds of privileged documents for a high-profile client suddenly becoming public fodder (think Sony’s epic email breach). Or the juicy details of a divorce proceeding (ahem, Tiger Woods), or even the strategy email exchanges between you and your client. Needless to say, these scenarios would produce devastating outcomes.

So, what can we do to protect data in this age of cyber hackers? One major consideration is to move to the Cloud. Cloud-based technology offers a variety of defenses against electronic breaches of information, primarily by utilizing robust auditing and encryption of sensitive data. In the eDiscovery world, using a cloud-based review tool not only gives you flexibility and contributes to a more efficient review, but also provides the necessary security to ensure the data in your care is safe.
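As a concrete illustration of the encryption piece, here is a minimal, hypothetical Python sketch of encrypting a document before it is stored and decrypting it on retrieval, using symmetric encryption from the widely used cryptography library. It sketches the general idea of protecting data at rest, not how any particular cloud review platform is built; the file names and key handling are simplified assumptions.

```python
# Hypothetical illustration only. Requires: pip install cryptography
from cryptography.fernet import Fernet

# In practice the key would live in a key-management service, never beside the data.
key = Fernet.generate_key()
cipher = Fernet(key)

# Encrypt a (hypothetical) privileged document before uploading it to storage.
with open("privileged_memo.docx", "rb") as f:
    ciphertext = cipher.encrypt(f.read())
with open("privileged_memo.docx.enc", "wb") as f:
    f.write(ciphertext)

# Decrypt on retrieval for review; without the key, the stored copy is unreadable.
with open("privileged_memo.docx.enc", "rb") as f:
    plaintext = cipher.decrypt(f.read())
```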

Now that 2018 is under way and the compliance deadline of the General Data Protection Regulation (GDPR) approaches, the responsibility to ensure adequate privacy and security has never been more critical, especially if your needs require sharing data with large corporations at risk of security breaches.

On a personal level, consider storing important and sensitive documents in the Cloud. Avoid the hacker hangout of public WiFi. Use complex password conventions. Be leery of unsolicited emails asking for private information or containing links to click. Use two-factor authentication whenever available. Keep your computer systems updated. Finally, monitor, monitor, monitor: your bank accounts, credit accounts, and credit reports. Vigilance, both personal and professional, is the best defense against a cyber attack.

For more information about Ipro’s cloud-based technology, click here.