
HHH: Two Terrifying Technologies

2018 October 31
by Henry Stokes

Hi there, Henry here! In honor of this horrifying holiday, I’m highlighting a double dose of dreadful technologies sure to give you a scare. I couldn’t pick just one; both are bona fide bone-chillers, a couple current computer capabilities that will conclusively creep you out:


  1. Biometric verification
  2. Deepfake

1. Biometric verification: what’s that?

This means using biological traits or measurements to help identify someone.  Fingerprinting is the analog version of biometric verification, but digital technology has developed to the point that we now have both voiceprinting and faceprinting.
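In software terms, biometric verification usually boils down to turning a face or voice sample into a numeric “print” and measuring how close it is to the one on file. Here’s a minimal sketch in Python to show the idea; the tiny “voiceprints” and the threshold are made up purely for illustration (real systems use trained neural networks to produce embeddings with hundreds of dimensions):

```python
import math

def cosine_similarity(a, b):
    """Compare two biometric embeddings (lists of floats); 1.0 = identical direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def verify(enrolled_print, candidate_print, threshold=0.8):
    """Accept the candidate if their print is close enough to the enrolled one."""
    return cosine_similarity(enrolled_print, candidate_print) >= threshold

# Toy "voiceprints" -- in practice a model would produce these from an
# audio clip or face image, and they would be much longer vectors.
alice_enrolled = [0.9, 0.1, 0.3]
alice_today    = [0.88, 0.12, 0.31]  # same person, slightly different sample
mallory        = [0.1, 0.9, 0.2]     # a different person

print(verify(alice_enrolled, alice_today))  # True
print(verify(alice_enrolled, mallory))      # False
```

Notice that the match is a similarity score against a threshold, never an exact comparison; that fuzziness is what makes these systems both convenient and spoofable.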

There’s data (not gold) in them thar hills. Artificial Intelligence (AI) can recognize our unique voices, and identity is only the beginning. A scary amount of meaningful information can now be figured out from a person’s voice, such as:

  • age
  • health
  • emotional state
  • room density
  • what walls are made of
  • approximate location

And M.I.T. and others are figuring out how to replicate voiceprints.  Ponder that for a minute. Yikes.

(But be careful what you think: computer systems can now pick up words you silently say to yourself.)

There’s also data beneath the skin. Facial recognition is getting more and more ubiquitous. For the sake of convenience, all of the new iPhones released this year have only Face ID, with no Touch ID capability anymore. Voice or face authentication helps simplify access to buildings, resources, and services. At the library of the future, it could even be used to check out materials.

In China, face IDs are all the rage. Your face gets you a loan, you can smile into a camera to pay for something, and the police wear special smartglasses for ID recognition.  If you jaywalk in one place in China, your face is captured and connected to your social media identity, and then you get publicly shamed on giant billboards for all to see.



What will it mean that these digital IDs from your voice or face become necessary to function in a connected digital world?

2. Deepfake: what’s that?

It’s now possible, using AI and deep machine learning, to combine and superimpose existing images and videos onto source images or videos. Sounds straightforward, but it has huge ramifications. What we think is our reality can be manipulated automatically, right in front of our eyes. Seeing is no longer believing.
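At its simplest, superimposing one image onto another is just weighted per-pixel blending; what deepfake systems add is a neural network that learns where and how to blend so the seams disappear. Here’s a toy sketch of just the blending step, with “images” reduced to short lists of grayscale pixel values purely for illustration:

```python
def blend(source, target, mask):
    """Superimpose source pixels onto target, weighted by a 0..1 mask.
    A mask value of 1.0 means 'use the source pixel'; 0.0 means 'keep the target'."""
    return [s * m + t * (1 - m) for s, t, m in zip(source, target, mask)]

# Toy 4-pixel grayscale "images" (0 = black, 255 = white).
face_to_paste  = [200, 210, 220, 230]  # pixels from the source face
original_frame = [50, 60, 70, 80]      # pixels from the target video frame
soft_mask      = [0.0, 0.5, 1.0, 1.0]  # feathered edge, then full replacement

print(blend(face_to_paste, original_frame, soft_mask))
# [50.0, 135.0, 220.0, 230.0]
```

The 0.5 in the mask produces that half-and-half pixel at the seam; in a real deepfake the network effectively learns a mask like this for every frame, which is why the forgeries look so seamless.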

I think in this instance it’s better to show rather than tell. Here is an example of a deepfake video of Barack Obama being made to give an MLK speech:




AI can also fill in the gaps inside a photo or a video:



AI can watch a video and predict what will happen next:


And it can even, all by itself, go ahead and visually show a reconstruction of that prediction happening.

A positive use of this tech is AI monitoring security footage to detect anomalies.  Inside factories, for example, AI could notice and alert humans of machine faults and fatigued workers, contributing to efficiency and saving money and even lives.

To learn more about deepfake videos, watch this recent primer from the Wall Street Journal:


Tech in Context:

What do digital ID systems and deepfake videos have in common? In the wrong hands – like those of criminals, hoaxers, scammers, and authoritarian regimes – they have the potential to be truly terrifying.

As a recent Wired article pointed out, digital ID systems are currently ripe for exploitation and abuse, threatening our freedoms and democracies. Ars Technica pointed out this month that “Touch ID requires a physical, affirmative act of pressing a finger onto the scanner. But Face ID can be used from a few feet away, practically with just a furtive glance.” Where we once could legally refuse to unlock the personal info on our phones, we may now relinquish it simply by glancing up when a phone is held in front of us.

Despite their many interesting and positive applications, AI’s new abilities to understand, predict, and remake videos are understandably troubling, particularly in regard to hoaxes. A deepfake video could be a form of identity theft: someone manipulating your likeness to make it look like you did something you didn’t do.


Library Role:

Should you as library workers be scared of these technologies?  Yes. But that doesn’t mean putting your head in the sand. Quite the opposite: it’s up to our profession to stay brave and keep our eyes wide open, doing all we can to better understand the potential pitfalls and dangers so we can protect our communities.  I see the library as part of the antidote to these techs’ terrors, a sort of Defense Against the Dark Arts.  It’s part of a library staffer’s job to inform patrons about privacy and security and to provide protection tools and resources.

As the Wired article urges, all must “advocate for the principles of data minimization, decentralization, consent, and limited access that reinforce our fundamental rights.” It falls especially to libraries to help voice this and perform the role of stewards and guides.

Libraries fight against misinformation and for free speech and other patron rights and protections. Deepfake videos have the potential to add more disinformation and untrustworthy media into the world.

But what’s new about that? These are age-old problems for librarians and archivists.

I personally think we got this.

To leave you with something fun, here’s a playful use of deepfake video technology. UC Berkeley recently published research cleverly entitled “Everybody Dance Now”, which shows how everybody will, well, be able to dance now. On video, anyway…




One Response
  1. Christy Brightwell
    November 3, 2018

    Hi Henry,

    I attended your Escape the Room program at annual assembly this summer (I enjoyed it). This is a “scary” post. I am interested in both of these technologies as solutions for library patrons and internet users in general. Do you know of any steps users and librarians can take, in addition to literacy instruction and cyber security measures already in place, to prevent the issues your post discusses?

    I am an optimist, so I think these problems are solvable somehow. Aren’t there usually patches from IT and security software that solve these problems?

    Just curious.

    C Brightwell
