HHH: Persistent Recognition Systems


You should be aware of a new technology that is increasingly aware of you.

Today’s highlight… Persistent Recognition Systems

What is it?

All around us are smart devices that monitor us in real time, all the time. There are our cameras, tablets, and the smartphones in our pockets, of course. But there are also home speakers like Amazon's Echo and Google Home, plus doorbell cameras and security systems, cars, refrigerators, watches and other wearables – all possessing persistent recognition. These systems are being installed in our day-to-day spaces at a steady pace. Analysts expect 75% of U.S. households to have smart speakers by 2025.

It's not just the monitoring. Combined with Artificial Intelligence (AI), these devices can know personal things about us. Some can even tell whether we're sick or angry. Your accent can help them determine which country you're originally from. They can take in background noise and make deductions like super-powered private detectives. Much of this serves to target consumers for marketing purposes: Alexa hears crying in the background, for example, and soon Amazon.com starts suggesting baby products for you to buy.

[Illustration: a smart speaker with human ears and menacing eyes, swiveling back and forth to listen to what everyone is saying. Erik Blad for The Intercept]

What’s special here is that we are persistently being recorded and our data recognized. These devices are always on, always listening; the data that’s collected is uploaded and stored in the Cloud. Our data is being mined in our homes and offices. And in the near future, it won’t just be the external world keeping its ears open to our data, but the inner world as well (literally inside our ears in some cases). We will have our internal states recorded and analyzed via sensors in hearables, injectables, etc.

What is it good for?

It's easy to see why one would be concerned about persistent recognition systems: they threaten privacy, and in the hands of data-hungry authoritarian governments they could power surveillance states. It's natural to be wary.

But what about the amazing good this technology could bring? One of my science heroes is Poppy Crum, a PhD neuroscientist and technologist who is Chief Scientist at Dolby Laboratories and an adjunct professor at Stanford. I've had the privilege of seeing her speak a couple of times at the SXSW Interactive Festival, and in the last talk I saw, she did point out that, indeed, technology will know more about us than we know about ourselves. But she argues this doesn't have to be a bad thing!

“Increased tracking and ubiquitous sensing can improve care and quality of life and mean greater autonomy and freedom.” – Poppy Crum

Crum started her presentation by sharing this quote from 1943:

"Most of the greatest advances of modern technology have been instruments which extend the scope of our sense organs, our brains, and limbs. Such are telescopes and microscopes, wireless calculating machines, motor-cars, ships and airplanes." - K.J.W. Craik, 1943

What if there is a natural progression from telescopes to airplanes to persistent recognition systems? With this new technology, combined with AI, we can continue to extend our scope. Crum asks us to imagine all the many powerful and transformative benefits these systems can provide.

Here are a few of the ways we can extend the scope of our senses:

Emotion – Devices that pick up our emotions might, for example, automatically play us soothing music in the car to prevent road rage; they might know when we're grieving and eliminate the ads we see. If they know we're having positive feelings, they could suggest ways to prolong or recreate them later. Or help us interact more effectively with others.

Breath – Devices that track our breath can help us improve our state of mind. If we know that we’re not breathing well, we can consciously control it and calm ourselves to reduce stress. This will also improve our heart rate, blood pressure and circulation. Having this tool in our arsenal will help us feel better and lead healthier lives.

Gaze – Devices that know where we're looking will know what we're interested in. And they might give us an extra way to interact with content. Say you want to turn the television on, for example. Just stare at the TV screen, and voila! – who needs the remote control? Want to pick which YouTube video to watch? No need for the mouse click: just let your gaze linger a bit longer on 'cat playing piano' and you'll be taken straight to the hilarity.
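Gaze selection like this is usually implemented as a "dwell time" trigger: the system samples where your eyes are pointing many times a second, and if your gaze rests on the same target long enough, that counts as a click. Here is a minimal conceptual sketch in Python – the gaze samples, target names, and threshold are all hypothetical stand-ins for what a real eye tracker would provide:

```python
# Conceptual sketch of dwell-time gaze selection. In a real system the
# samples would stream from an eye tracker; here they are just a list.

DWELL_THRESHOLD = 1.0  # seconds of sustained gaze needed to trigger a "click"

def select_by_dwell(gaze_samples, sample_interval=0.1):
    """Return the first target the gaze rests on for DWELL_THRESHOLD seconds.

    gaze_samples: sequence of target names (or None), one per sample interval.
    """
    current, dwell = None, 0.0
    for target in gaze_samples:
        if target is not None and target == current:
            dwell += sample_interval
            if dwell >= DWELL_THRESHOLD:
                return target              # lingered long enough: select it
        else:
            current, dwell = target, 0.0   # gaze moved: reset the timer
    return None                            # gaze never lingered long enough

# A quick glance at 'news', then a lingering look at 'cat playing piano':
samples = ["news"] * 3 + ["cat playing piano"] * 12
print(select_by_dwell(samples))  # -> cat playing piano
```

The design question in practice is tuning the threshold: too short and every stray glance becomes a click (the "Midas touch" problem), too long and the interface feels sluggish.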

Inner Ear – A lot of data can be mined from inside our ears to help understand our internal states. And a lot can be done with persistent recognition systems and AI in our ears to help create personalized experiences for us in the context of our lives. Via tracking, these devices will learn from the environment for us in order to provide a tailored experience at the point of need. If your hearable recognizes from your internal state that the noise at a party is overwhelming you, it can alter the volume within your ears to help you have a better experience – or isolate only the voices of the people you are conversing with. Crum envisions a future where people will embrace this empowering hearable tech:

“Like fashion eyewear, hearing aids may become a style choice.” – Poppy Crum

Subvocal – Check out this conceptual video from MIT for a device that picks up your subtle internal subvocal movements in order to communicate with a computer. This is great for protecting our privacy when we may not want to use voice recognition in public.

Spaces – Once these systems are embedded throughout our physical environment, they may be able to communicate with one another and adjust the space itself for individual users. Imagine assisted living facilities where a smart room with persistent recognition could help folks perform regular tasks they may find difficult, such as automatically lowering or raising the blinds. Imagine an elevator which takes the data from a person with physical impairments and automatically presses the right button for them. This tech has the capacity to eliminate barriers to access, creating an adaptive environment for everyone – especially people with disabilities.

Memory – A device that tracks and stores data and is capable of reminding you of things has the potential to aid a lot of people. Those who live with memory loss reportedly love Alexa-style services. They can ask what day it is twenty times a day and still get the correct answer each time, without judgment.

Crum points out that we can see these devices as extended partners, not assistants. One-size-fits-all technology – what we have now and have always had – will be a thing of the past. The next generation will wonder in amazement how we all had to use the same cookie-cutter tech.

Crum sees the future as the 'era of the empath'. If our technology can know how we're feeling – measure our pupil dilation, heat signatures, the amount of carbon dioxide in our breath – and determine from that data whether we're lying, in love, feeling lousy, etc., then it means "we can bridge the emotional divide." Crum predicts it's the end of the poker face. "We get a chance to reach in and connect to the experience and sentiments that are fundamental to us as humans in our senses, emotionally and socially." Examples she gives are a high school counselor being able to know whether a seemingly cheery student is actually having a hard time, or an artist able to find out exactly how her work affects people emotionally.

Another exciting possibility is in the field of healthcare and the potential of these systems to diagnose diseases. This technology can differentiate coughs and sneezes from other background noises, so it could discern whether we're ill and suggest solutions. If our speech patterns and body movements are being collected through persistent recognition, AI might use that data to determine whether we are developing early signs of diabetes, multiple sclerosis, bipolar disorder, or Parkinson's, and warn us. Using real-world labeled 911 audio of cardiac arrests, researchers trained the AI in smart devices to accurately classify instances of agonal breathing – an early warning sign of cardiac arrest. Thus, the AI on the other end of the 911 call (or even the smart speaker in your house) might know you are having a heart attack before a human dispatcher does.
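The pipeline behind systems like this is roughly: chop the always-on audio stream into short windows, score each window with a trained model, and raise an alert when the score crosses a threshold. The researchers' actual classifier was a neural network trained on labeled 911 audio; the toy sketch below substitutes a crude energy-burst heuristic for that model (all names and numbers here are illustrative, not the researchers' method) just to show the windowed stream → score → alert shape:

```python
# Toy sketch of an always-listening detection pipeline. A crude energy-burst
# heuristic stands in for a real trained classifier.

import math

def frame_energy(frame):
    """Mean squared amplitude of one window of audio samples."""
    return sum(s * s for s in frame) / len(frame)

def detect_bursts(samples, frame_size=100, ratio=10.0):
    """Flag frames whose energy jumps well above the running background level."""
    background, alerts = None, []
    for i in range(0, len(samples) - frame_size + 1, frame_size):
        e = frame_energy(samples[i:i + frame_size])
        if background is None:
            background = e                 # first frame sets the baseline
        elif e > ratio * background:
            alerts.append(i)               # loud burst: candidate event
        else:
            # slowly adapt the baseline to the ambient noise level
            background = 0.9 * background + 0.1 * e
    return alerts

# Quiet hum with one loud burst in the middle:
quiet = [0.01 * math.sin(i / 5) for i in range(300)]
burst = [0.8 * math.sin(i / 2) for i in range(100)]
print(detect_bursts(quiet + burst + quiet))  # -> [300]
```

A production system would replace `frame_energy` and the threshold with a model that distinguishes a cough or agonal gasp from a dropped pan, but the surrounding loop – continuous windowing over a live stream with an adaptive notion of "normal" – is the same.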

How do libraries fit in?

Libraries are well-positioned to educate their communities about this emerging technology. Some libraries have partnered with voice assistant platforms to create apps that let patrons use the library's services through those devices. Others are teaching classes on how to set the devices up, or publishing helpful FAQs.

Library staff should be aware of the potential of persistent recognition systems, both good and bad. There are the very serious privacy and security concerns to keep up with and inform patrons about. And there are the beneficial uses that Crum envisions. Libraries are all about recognizing the importance of personalization and inclusive services; we seek to understand and meet the needs of unique users. It is inevitable that our communities will soon be joined by these AI partners with the capability to radically transform their lives, hopefully for the better. Let's work to ensure this technology reduces barriers to access and helps us better connect with each other, enabling and empowering us to lead healthier, happier lives.

One thought on “HHH: Persistent Recognition Systems”

  1. I am glad I am older, because I do not want to live in a world where a computer somewhere is unmasking my personal emotions or adjusting my environment simply because my eye lingered too long.

    Imagine that student sitting across from a counselor: counselor unmasks student’s true feelings and her own feelings shift in response, and then the student (using his own tech) unmasks her feelings and adjusts his… the two could sit there staring at each other while computers recorded the ensuing game of emotional chess.

    And of course we’ve already seen the result of YouTube’s algorithmic effect on political echo chambers. Not sure we need further AI help in the creation of personal experience bubbles.

    Some of these technologies, like assistance technologies, could be developed as an opt-in. It’s the lack of opt-in in persistent recognition that makes me want to hop off the planet!

    Thanks for sharing! Going off grid now… ha!
