Over the past few years, the use of biometrics – unique physical characteristics that can identify individuals – has exploded. This is especially true for facial recognition technology, which you might use to unlock your phone, tag your friends in Facebook photos, or log in to your computer.
But with this useful technology come rising concerns about stolen data, errors, lack of oversight, and even terrorism plots. In this article, we’ll take a closer look at the impact facial-recognition technology has on our privacy and what we can do to help protect ourselves in today’s digital era.
The rise and fall of FaceApp
In July of 2019, everyone went nuts for FaceApp, a program that uses AI to show you how you might look when you’re old. After celebrities and influencers shared their results, the app exploded in popularity, hitting the top of both the Apple App Store and Google Play charts.
And then the backlash started.
Twitter users began pointing out that the company behind the app is based in Russia and uses vague and ominous language in its terms and conditions, which contain sentences like:
“You grant FaceApp a perpetual, irrevocable, nonexclusive, royalty-free, worldwide, fully-paid, transferable sub-licensable license to use, reproduce, modify, adapt, publish, translate, create derivative works from, distribute, publicly perform and display your User Content.”
Shortly after, rumors started swirling that FaceApp is actually Russian spyware that has the ability to offload your entire camera roll for use in some kind of facial recognition database.
Bob Lord, the chief security officer of the Democratic National Committee, sent an email alert to 2020 presidential campaigns, warning them not to use FaceApp: “It’s not clear at this point what the privacy risks are, but what is clear is that the benefits of avoiding the app outweigh the risks.”
Sen. Chuck Schumer posted an open letter to Twitter on July 18, urging the FBI and FTC to investigate the app. "It would be deeply troubling if the sensitive personal information of U.S. citizens was provided to a hostile foreign power actively engaged in cyber hostilities in the United States," wrote Schumer.
What caused the panic?
The ensuing panic may have been a result of two critical factors.
First, the app is Russian, and Americans tend to be wary of Russia for a variety of reasons. Some are historical, like the Cold War. Some are more recent, like the country’s interference in U.S. elections.
Second, the terms and conditions of FaceApp really are scary. They’re incredibly broad, and the legalese is so thick that it’s hard to read them and truly know what the repercussions are. At the end of the day, though, they’re probably not that different from the terms of other apps. Dropbox has been labeled “hostile towards privacy.” As recently as two years ago, Google was reading Gmail users’ emails to display personalized ads. And social media juggernauts can use your accounts for advertising purposes or sell your data to third-party companies.
The main difference between FaceApp’s terms and those of other apps may simply be that we were concerned enough to actually read FaceApp’s. Most users hit “accept” without even scanning the legal document. In fact, InfoArmor’s Data Privacy and Consumer Expectations report found that only 30 percent of people regularly read the privacy policies of the apps they use.
How did FaceApp respond?
Yaroslav Goncharov, the CEO of the company that developed FaceApp, denies that his product is being used for anything nefarious. In an email to the Denver Post, he said his company will “only upload a photo selected by a user for editing.” He also denied that Russian authorities have any access to photos uploaded to the app, and insisted “most” images are deleted from the app’s servers within 48 hours.
The impact of facial recognition
Facial recognition is often used to make everyday tasks more convenient. It’s also used to catch criminals — without much regulatory oversight.
According to a 2016 report from the Center on Privacy and Technology at Georgetown Law, the photos of more than 117 million Americans are in databases available for law enforcement authorities to run facial recognition on. Unfortunately, the same report found that less than 10 percent of the agencies that use facial recognition technology have a “publicly available use policy.” This means the public has no idea how agencies are using the technology, whether or not it’s effective, and if the system is being abused.
Private entities are using it to monitor people, too. Taylor Swift, for example, has reportedly used facial recognition cameras to scan for stalkers at her concerts.
There are larger concerns about what the widespread use of facial recognition technology means for our individual privacy.
With Amazon, Google, and Microsoft all developing their own software, it seems likely the use of facial recognition software will increase in the near future. Unlike other identification methods, like a fingerprint scan or an ID card, facial recognition can be used on people without them noticing or, crucially, giving their consent.
We’re now surrounded by cameras almost every time we’re in public. What would happen if those cameras were connected to facial recognition software with an extensive database of faces? It would, essentially, make it impossible to go anywhere without being tracked.
China, in particular, has been accused of putting the technology to some fairly terrifying Big Brother-ish uses. Most notably, the government allegedly used it as a tool to track Uighurs, the country’s persecuted Muslim minority, many of whom have allegedly been placed in internment camps. The country has also equipped some police with facial recognition glasses.
Another major concern is the accuracy of the technology. While some systems are thought to work extremely well, others…not so much. According to a 2018 report by the privacy watchdog Big Brother Watch, facial recognition trials by the Metropolitan Police in the UK returned accurate matches only two percent of the time.
Worryingly, inaccurate readings may be more likely to affect some groups of people than others. A recent test of software from Idemia, a facial recognition vendor widely used by law enforcement agencies, found that black women may be ten times more likely to receive a false match than white women.
Which is to say nothing of potential security issues. As with any kind of data storage, facial recognition databases are at risk of being breached. In fact, cybercriminals have already stolen travelers’ photos collected by U.S. Customs and Border Protection.
You might not have anything to worry about right now, but the problem is, we have no idea how facial recognition technology will continue to develop.
As a sort of worst-case scenario, a group of researchers out of Berkeley released a short film in 2017 called Slaughterbots, which hypothesized that, in the near future, it would be entirely possible for a terrorist to create a weaponized drone that could use facial recognition technology to locate and assassinate anyone they wanted.
Hopefully, this won’t happen any time soon. But the fact that it’s possible should make us at least a little worried about where photos of our faces end up.
Is anyone doing anything to stop the spread of facial recognition?
Yes. San Francisco, Oakland, and Somerville, Massachusetts, have banned their city agencies from using facial recognition technology. There is also a bill moving through the House of Representatives that would ban its use in public housing.
Fittingly, the Land of Lincoln is also leading the charge in protecting our freedoms. The Electronic Frontier Foundation calls the Illinois Biometric Information Privacy Act of 2008 (BIPA) “one of our nation’s most important privacy safeguards for ordinary people against corporations that want to harvest and monetize their personal information.”
A core component of the law is that it bars organizations from collecting, using, or sharing a citizen’s biometric information without his or her informed opt-in consent. The “opt-in” part is especially important.
Take Facebook, for example. Nearly a decade ago, the social media juggernaut rolled out its Tag Suggestions feature, which uses facial-recognition technology to cross-reference faces in newly uploaded photos with the company's extensive catalog of “known faces.” If there’s a match, Facebook allows users to automatically tag the recognized faces.
This is troubling for many reasons, but the biggest issue is that the tagging feature was rolled out without users’ express consent. Instead of requiring users to opt in, Facebook makes them opt out, a direct violation of BIPA.
This created a few small waves back in 2010 when Tag Suggestions first launched. It did so again in 2015 when Illinois citizens filed a class-action lawsuit against Facebook, known as Patel v. Facebook. Facebook tried to dismiss the case, but the trial court denied the motion and certified a class of Facebook users. This decision was upheld on August 8, 2019, when a three-judge panel unanimously agreed the Patel plaintiffs have constitutional standing to sue Facebook for these privacy violations.
Then, on September 3, 2019, Facebook announced it would turn off all facial recognition features by default, but only for new accounts. The billions of existing Facebook users will still have to manually opt out of Tag Suggestions and other facial recognition features. For information on how you can do this, check out this article.
How can you protect your privacy? (no mask required)
Of course, it’s not practical to put on a mask every time you spot a camera. So, what can you do? Here are a few tips you can use to live a more private life.
#1 Review an app’s privacy policy before using it
Take time to actually read the privacy policies of the apps and programs you use. We realize they’re boring, but they’re also super important. A good privacy policy will:
Describe the types of information collected and outline how that data is used
Disclose how information is gathered and stored
Identify any third parties or organizations that might have access to your information
Outline the available privacy choices, with instructions on how to opt out of information sharing — and the consequences of doing so
For more tips, you can check out our article, A Beginner’s Guide to Understanding Privacy Policies.
#2 Reduce your digital footprint
Every time you sign up for a newsletter, post to social media, or make an online purchase, you leave behind a trail of your online activity. This is known as your digital footprint, and you should take every possible measure to protect it.
Here are a few simple steps you can follow to live a more private life online:
Limit the data you share
Before you hit subscribe, fill out a form, or post to Instagram, ask yourself: is the payoff really worth it? Consider what you’ll receive in exchange for your personal information. If it doesn’t feel like an equal trade, pass.
Delete your unused accounts
The more items in your digital footprint, the more exposed you are to a potential data breach. So if you have an old social media account you no longer use, or you’re subscribed to a blog you don’t read, consider deleting your account.
Avoid unsafe websites
If a website’s address doesn’t begin with HTTPS — where the “S” stands for “secure” — you should go elsewhere. At the bare minimum, never share confidential information on unsecured sites, especially payment details.
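If you’re comfortable with a little scripting, the HTTPS check described above is easy to automate. Here’s a minimal Python sketch using only the standard library (the example URLs are placeholders, not real sites you need to worry about):

```python
from urllib.parse import urlparse

def is_secure(url: str) -> bool:
    """Return True only when the URL uses the HTTPS scheme."""
    return urlparse(url).scheme == "https"

# Placeholder URLs for illustration
for url in ["https://example.com", "http://example.com"]:
    print(url, "->", "secure" if is_secure(url) else "not secure")
```

Note that HTTPS only tells you the connection is encrypted; it says nothing about whether the site itself is trustworthy, so treat this as a bare-minimum check rather than a seal of approval.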
While there’s a great deal of uncertainty surrounding the future of facial recognition, one thing is crystal clear: we can’t wait for others to protect our privacy. Every individual has an obligation to safeguard their identity as much as possible in today’s digital era.