HHH: A Trio of Terrifying Tech

Logo for Henry's High-Tech Highlights

Hi there, Henry here! Last year, on this hair-raising holiday, I highlighted a double dose of dreadful technologies to scare the smart pants off you. This time around, I’m going to escalate the eeriness up to eleven by trick-or-treating you to a triple threat of alarming innovations. Today’s highlights are:

  1. Smart Dust
  2. WiFi Recognition
  3. Digital Clone
Animation of an old timey black & white movie with grim reaper type looming over text coming towards screen that reads: "PREPARE TO BE SCARED!"

1. Smart Dust, what’s that?

Black and white photograph of a hand throwing up dust.

It sounds like the stuff of science fiction, but it’s real and just around the corner, soon to be billowing our way. Computers can be made the size of a grain of dust and light enough to float in the air. These clouds of smart dust can monitor the environment, gather data, and even take photographs.

These microelectromechanical systems (MEMS, and yes, that's a mouthful) may end up being self-powered too, harvesting energy from passive WiFi signals and the heat of our bodies.

Animation of a man mouthing the word "WOW!" as a starburst is superimposed over his head as if to say "MIND BLOWN."
Mind blown.

Here are more powers the potent particles may possess:

  • Detect light, vibrations, acceleration, stress, pressure, humidity, sound
  • Help with energy efficiency and environmental comfort in buildings
  • Measure air quality
  • Monitor crops and status of equipment
  • Assist with health and medicine: e.g., doctors could diagnose without surgery by having patients inhale smart dust instead of inserting an endoscope

Of course, the dark side of the dust is that these undetectable particles could be used to track us. It’s not hard to imagine them being used for security, wirelessly monitoring people and products.

One day soon we may be seeing signs up that warn us:

“SMILE, the DUST is WATCHING.”


2. WiFi Recognition, what’s that?

Cartoon of Superman having X-ray vision to see Lois Lane through a wall.

WiFi recognition uses WiFi and other radio waves to track our physical movements and even our emotional states. Transmitters send out signals, and as we move through them, a device can detect the signals bouncing off us and onto other objects. This allows the device to effectively see through walls and track our movements. It can even tell we’re feeling freaked out by that. More on its empathic powers in a bit.
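
To make the basic idea a little more concrete, here’s a toy sketch of my own (not MIT’s actual method, and the window size and threshold are made-up values): a body moving through the signal path makes the received signal strength jitter, and a simple variance check over recent readings can flag that motion. Real systems like CSAIL’s analyze the full reflected signal with neural networks; this only shows why movement is visible in radio data at all.

```python
from collections import deque

WINDOW = 5           # number of recent signal-strength readings to consider (assumed)
THRESHOLD = 4.0      # variance in dBm^2 above which we call it "motion" (made up)

samples = deque(maxlen=WINDOW)

def update(rssi_dbm: float) -> bool:
    """Feed one WiFi signal-strength reading; return True if recent jitter suggests movement."""
    samples.append(rssi_dbm)
    if len(samples) < WINDOW:
        return False
    mean = sum(samples) / len(samples)
    variance = sum((s - mean) ** 2 for s in samples) / len(samples)
    return variance > THRESHOLD

# A quiet room gives steady readings; a person walking past makes them jitter.
for reading in [-52, -52, -51, -52, -53, -47, -58, -44, -60, -49]:
    print(reading, update(reading))
```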

Here’s something appropriate for the season: MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) has invented a way to recreate our skeleton via WiFi. Spooky!

Goofy dancing skeleton illustration.
Perfect for Halloween

It actually looks like this:

There are myriad uses for WiFi recognition. If health metrics are collected by our router, imagine it communicating with our smart home and automatically adjusting the appliances accordingly so we lead healthier lives. Or imagine a baby monitor that also collects the infant’s vital signs. It could also know we’re snoring and what state of sleep we’re in, thereby helping us with our sleep habits. Thanks for the health hacks, all-knowing WiFi!

Another MIT CSAIL creation, EQ-Radio, can detect emotions by collecting breath and heart rhythms from WiFi signals.

Maybe our emotionally intelligent WiFi routers will see how angry we get when they stop working and begin to apologize for once.

Image of a man getting mad at his WiFi router

3. Digital Clone, what’s that?

Photo of a man seeing his reflection in glass only it's facing the wrong way.

Generally we think of digital clones as the ones used for special effects in TV and movies: a fake, computer-generated version of, say, Tom Cruise is made to appear hanging onto the side of a plane. Wait, that’s a bad example, since I think Tom Cruise does a lot of his own stunts. Anyway, there’s another kind of digital cloning that doesn’t need a body to be represented.

A personal chatbot or “mebot” is a deployable AI version of you. It learns from you and then represents you online. Conduct an interview with it, ask it anything, and your digital clone will respond just like you would. Your friends can ask it questions if they don’t want to bother you or might be embarrassed by the line of inquiry. You no longer need to reach out directly to a person for certain pieces of information. For example, you could ask a clone:

  • Do you have any food allergies or special dietary needs?
  • What was your famous chocolate cookie recipe again?
  • How are you feeling today? (If health metrics are involved and your partner is the clone, you could determine, without bothering the real person, what their current stress level is and see if they are up to visiting the in-laws.)
  • What did you eat for breakfast this morning? Or where did you buy that dress? Works great for celebrities who can’t possibly answer every random question from fans.

Library Role:

All three of these technologies are likely to cause a shiver to run down our spines. That’s okay. It’s good to be scared. It means we’re aware. It’s our job as library staff to heroically face down the horrors and negative side effects of potentially problematic emerging technologies on behalf of our communities. When tech with ethically questionable capabilities appears, look it right in the eye and seek to better understand the monster lurking inside. Remember: knowledge is power. And libraries can make a difference by ensuring that our students and patrons are well-equipped and well-informed to make the right choices, with free speech and the right to privacy protected – all that good stuff. We can’t do that if we put our heads in the sand.

So keep your heads up high, heroes!

And HAPPY HALLOWEEN, everyone!


P.S. My recent webinar on Virtual Reality, Augmented Reality, and libraries is now available for your viewing pleasure! Just click below to be taken to the shortcourse page that contains the archived recording link. Remember to log in and enroll if you want to receive 1 CE credit.

HHH: Smart Garb

Logo for Henry's High-Tech Highlights

Usually when we think about wearables, items such as watches, clips, and bracelets come to mind. They’re special accessories that we put on – more like gadgets, really. But as time moves forward, the idea of a separate device to carry around will be less and less appealing to us. Instead, we’ll begin to prefer new technologies that directly and seamlessly integrate into the everyday clothing we already wear. 

Today’s highlight: Smart Garb


The smartphone will go away one day. And to that I say, good riddance. I look forward to the time when I’ll see the looks of disgust and disbelief on the faces of my grandchildren as I reminisce about the old days:

“Back in my day, we used to carry this little mini pocket-computer device everywhere we went, and we would keep our eyes glued straight down at its tiny, barely even visible screen clutched in our hands instead of looking out at the world and each other.”

– Grandpa Henry

As Future Today Institute predicts, “We will transition from just one phone that we carry to a suite of next-gen communication devices, which we will wear and command using our voice, gesture and touch.”

Forget about all these devices and gadgets. Let’s just use what we already have – what we already wear.

Smart clothing isn’t just coming soon; it’s here! Allow me to take stock of what smart garb is already out there. You may be adding something like the following to your wardrobe now or in the near future:


> Electronic textiles or ‘e-textiles’ to interact with your music, GPS, and phone

Levi’s Commuter x Jacquard is a smart jacket fashioned from a specially designed denim woven with conductive thread; just touch or swipe the sleeve to control your connected smartphone.

Animation showing a cyclist wearing a jacket and swiping at his sleeve to dismiss an incoming phone call from his boss.

> Yoga pants to improve your ‘downward dog’

These pants pair with an app to help you practice yoga better. They detect your physical movements and compare your data to baselines, then “nudge” you toward better walking, sitting, and downward-dogging. Curate your own personal yoga class! (Examples: Nadi X, Pivot Yoga)

Animation of a yoga practitioner posing next to a smartphone displaying an app that is measuring her body position.
credit: Nadi X

> Pajamas to recover from workouts

Had a hard workout? Put these smart PJs on and help your muscles recover faster using infrared energy.


> Swimsuits to prevent skin cancer

When the UV levels are high, the suit notices and alerts you to apply more sunscreen. Don’t trust your kids – or those sun-worshiping grown-ups whose skin you care about – to be careful about the Sun? The suits’ sensors can even be remotely monitored by the more responsible party (i.e. you).


> Socks to help manage diabetes

Sensors in these socks detect levels of glucose or lactate in sweat and can even alert those with diabetes to developing foot ulcers, which can often lead to amputations if not caught in time.

Photograph of smart socks
credit: Siren

> Belts for epilepsy and to help you diet

For epilepsy patients, there’s a belt that keeps tabs on your respiration rate and sweat build-up and automatically alerts a loved one if something goes awry. There are also belts that use sensors and measurements to give you feedback on your dieting efforts, tracking waist size, food intake, how much you’re sitting, and so on. And there’s even one that helps with the opposite: it automatically loosens when you’ve overeaten so you don’t have to cease your feastin’.

Photograph of Smart Belt next to smartphone showing weight loss app.

> Gloves to help you communicate

In Kenya, a researcher invented smart gloves called Sign-IO that can translate sign language into speech using gesture recognition.

Photograph of inventor of smart gloves that translate American Sign Language.
Source: Pulse Kenya

> Shoes to help with health, sports, visual impairment, falls, and personalized fashion

Walk a mile in these shoes… and the shoes will know you traveled a mile, plus a whole bunch of other things…

  • Health:
    • Many smart shoes can measure your gait, calorie burn, pace, distance, steps, stride, and cadence – all helpful for your fitness and health improvement efforts. There are safety shoes that monitor the posture of construction workers and keep them from spending too much time at work crouching, kneeling, or on tip-toes, preventing fatigue and common injuries, plus lower back pain and sciatica. For patients needing post-surgical evaluation, the shoes collect valuable data for their doctors to speed up recovery. Diseases can potentially be diagnosed as well.
  • Sports:
    • Improve your athletic performance with your shoes’ assistance. Even your golf swing can be perfected.
  • Visual impairment:
  • Aging population:
    • If the wearer falls, an alarm is triggered and the location is sent to a family member or friend. (E-vone)
Photograph showing E-inking on a shoe.

And here’s a list of other “shoe”-per powers I’ve seen out there:

  • Self-lacing, self-tightening
  • Color changing / E-inking
  • Heat up on cold days
  • Audible real-time coaching feedback as you run
  • Altitude measurement

> Helmets to send for help in accidents

A number of helmets now have built-in brake lights and turn signals. If the cyclist is struck or falls, their emergency contact is instantly alerted. Some use bone conduction speakers to help you safely play your music and make phone calls.

Animation of a cyclist falling down and his helmet sending an SOS.
LIVALL BH81H Smart Cycling Helmet with HRM

> Nail art to prevent skin cancer and premature aging

Like the aforementioned smart swimsuit, this adhesive-backed decorative nail art by L’Oreal contains sensors to measure the amount of UVA and UVB rays you’re being exposed to and presents the data to you via a smartphone app.

Photograph of the smart nail art on a thumbnail of a woman's hand.

> What about libraries?

Here is some food-for-thought on how smart garb intersects with libraries. Let me know if you have further ideas!


Graphic of a lightbulb

Inclusive & Accessible Services

As described above, there are already numerous ways that smart garb can assist people with disabilities, epilepsy, diabetes, visual and speech impairments, risk of falling, recovery from surgery, etc., etc. – and more no doubt to come in the future. The service population benefiting from this technology is a large, important part of society – and as ALA writes, “libraries should be fully inclusive of all members of their community and strive to break down barriers to access.” Staff should stay up-to-date and proactive in supporting the ways smart garb promotes inclusion and accessibility.


Graphic of a lightbulb

New Interfaces

If smart garb becomes more complex and widely adopted, and if it replaces smart phones in the future, there will need to be an entirely new haptic language (using sense of touch and motion, rather than speech, text, or touchscreens) so that we can communicate with the systems and interfaces we use in our day-to-day lives.

This would inevitably result in new ways to use library services. ALA’s Center for the Future of Libraries sagely points out that “patrons may increasingly expect that their library experiences – search, navigation of the library space and stacks, or even reading time – would integrate wearables and the haptic feedback that they provide.”

Libraries will need to learn to speak ‘garblish’ (I’m coining the term; you heard it here first!) and help empower and enable those patrons who may be left behind as everything shifts to the new paradigm.


Graphic of a lightbulb

Digital Divide

If smart clothing becomes as ubiquitous as the smart phone, how can we ensure that this new technology is made accessible to all strata of society so that all have the opportunity to succeed in the future? Providing access and “wearable literacy” may become part of the library’s mission. Some community members may not be able to afford certain articles of smart garb or may only need them during a particular time frame. Would libraries start adding clothing and accessories to their collections for circulation?


Graphic of a lightbulb

Privacy Issues

Libraries should advocate for, educate about, and protect people’s privacy – and smart garb certainly poses significant risks in that regard. Security is paramount in ensuring personal biometric and location data isn’t abused by third parties. We don’t want to air our dirty smart-linen in public, so to speak. A recent Wired article outlined how the Fitbit heart-rate data of a murder victim was admitted as evidence to try to convict a suspect. It contained this quote: “While you have the right to remain silent, your gadgets mostly do not.”


HHH: Persistent Recognition Systems

Logo for Henry's High-Tech Highlights

You should be aware of a new technology that is increasingly aware of you.

Today’s highlight… Persistent Recognition Systems


What is it?

All around us are smart devices that monitor us in real-time, all the time. There are our cameras, tablets, and the smartphones in our pockets, of course. But there are also smart speakers like Amazon’s Alexa-powered Echo and Google Home, plus doorbell cameras and security systems, cars, refrigerators, watches and other wearables – all possessing persistent recognition. These systems are being installed in our day-to-day spaces at a constant pace. Analysts expect 75% of U.S. households will have smart speakers by 2025.

It’s not just the monitoring. Combined with Artificial Intelligence (AI), these devices can know personal things about us. Some can even tell whether we’re sick or angry. Your accent can help them determine which country you’re originally from. They can take in background noise and make deductions like super-powered private detectives. All of this serves the function of targeting consumers for marketing purposes. Alexa hears crying in the background, for example, and soon Amazon.com starts suggesting baby products for you to buy.

Animated cartoon of a smart speaker with human ears and menacing eyes, swiveling back and forth listening to what everyone is saying.
Illustration: Erik Blad for The Intercept

What’s special here is that we are persistently being recorded and our data recognized. These devices are always on, always listening; the data that’s collected is uploaded and stored in the Cloud. Our data is being mined in our homes and offices. And in the near future, it won’t just be the external world keeping its ears open to our data, but the inner world as well (literally inside our ears in some cases). We will have our internal states recorded and analyzed via sensors in hearables, injectables, etc.

What is it good for?

It’s easy to see why one would be concerned about persistent recognition systems for their threat to privacy and their potential use by data-controlling authoritarian governments who could bring about surveillance states. It’s natural to be wary.

But what about the amazing good this technology could bring? One of my science heroes is Poppy Crum, PhD, a neuroscientist and technologist who is Chief Scientist at Dolby Laboratories and an adjunct professor at Stanford. I’ve had the privilege of seeing her speak a couple of times at the SXSW Interactive Festival, and in the last talk I saw, she did point out that, indeed, technology will know more about us than we know about ourselves. But she argues this doesn’t have to be a bad thing!


“Increased tracking and ubiquitous sensing can improve care and quality of life and mean greater autonomy and freedom.” – Poppy Crum

Crum started her presentation by sharing this quote from 1943:

"Most of the greatest advances of modern technology have been instruments which extend the scope of our sense organs, our brains, and limbs. Such are telescopes and microscopes, wireless calculating machines, motor-cars, ships and airplanes." - K.J.W. Craik, 1943

What if there is a natural progression from telescopes to airplanes to persistent recognition systems? With this new technology, combined with AI, we can continue to extend our scope. Crum asks us to imagine all the many powerful and transformative benefits these systems can provide.

Here are a few of the ways we can extend the scope of our senses:

Emotion – Devices that pick up our emotions might, for example, automatically play us soothing music in the car to prevent road rage; they might know when we’re grieving and eliminate the ads we see. If they know we’re having positive feelings, they could suggest ways to prolong or recreate them later. Or help us interact more effectively with others.

Breath – Devices that track our breath can help us improve our state of mind. If we know that we’re not breathing well, we can consciously control it and calm ourselves to reduce stress. This will also improve our heart rate, blood pressure and circulation. Having this tool in our arsenal will help us feel better and lead healthier lives.

Gaze – Devices that know where we’re looking will know what we’re interested in. And they might give us an extra way to interact with content. Say you want to turn the television on, for example. Just stare at the TV screen, and voila! – who needs the remote control? Want to pick which YouTube video to watch? No need for the mouse click: just linger your gaze a bit longer on ‘cat playing piano’ and you’ll be taken straight to the hilarity.
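
That “linger a bit longer” trick is usually called dwell-time selection. Here’s a minimal, hypothetical sketch of the logic (not any product’s actual code); the one-second threshold and the sample format are assumptions.

```python
DWELL_SECONDS = 1.0  # made-up threshold: how long a steady look counts as a "click"

def dwell_select(gaze_samples):
    """gaze_samples: (timestamp_seconds, target_id or None) readings from an eye tracker."""
    current, since = None, None
    for t, target in gaze_samples:
        if target != current:
            current, since = target, t      # gaze moved to a new target (or away)
        elif target is not None and t - since >= DWELL_SECONDS:
            return target                   # lingered long enough: select it
    return None

# Example: the viewer's gaze settles on the 'cat playing piano' thumbnail.
samples = [(0.0, None), (0.2, "cat playing piano"),
           (0.7, "cat playing piano"), (1.3, "cat playing piano")]
print(dwell_select(samples))  # -> cat playing piano
```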

Inner Ear – A lot of data can be mined from inside our ears to help understand our internal states. And persistent recognition systems and AI in our ears can create personalized experiences for us in the context of our lives, learning from the environment in order to tailor the experience at the point of need. If your hearable recognizes from your internal state that the noise at a party is overwhelming you, it can alter the volume within your ears to help you have a better experience – or isolate only the voices of the people you are conversing with. Crum envisions a future where people will embrace this empowering hearable tech:

“Like fashion eyewear, hearing aids may become a style choice.”

– Poppy Crum

Subvocal – Check out this conceptual video from MIT for a device that picks up your subtle internal subvocal movements in order to communicate with a computer. This is great for protecting our privacy when we may not want to use voice recognition in public.

Spaces – Once these systems are embedded throughout our physical environment, they may be able to communicate with one another and adjust the space itself for individual users. Imagine assisted living places where a smart room with persistent recognition could help folks perform regular tasks they may find difficult, such as automatically lowering or raising the blinds. Imagine an elevator which takes the data from a person with physical impairments and automatically selects the right button for them. This tech has the capacity to eliminate barriers to access, creating an adaptive environment for everyone – especially people with disabilities.

Memory – A device that tracks and stores data and is capable of reminding you of things has the potential to aid a lot of people. Those who live with memory loss reportedly love Alexa-style services. They can ask what day it is twenty times a day and still get the correct answer each time, without judgement.


Crum points out we can see these devices as extended partners, not assistants. The concept of one-size-fits-all technology, which is what we have now and have always had, will be a thing of the past. The next generation will wonder in amazement how we all had to use the same cookie-cutter tech.

Crum sees the future as the ‘era of the empath’. If our technology can know how we’re feeling – measure our pupil dilation, heat signatures, the amount of carbon dioxide in our breath – and determine from that data whether we’re lying, in love, feeling lousy, etc., then it means “we can bridge the emotional divide.” Crum predicts it’s the end of the poker face. “We get a chance to reach in and connect to the experience and sentiments that are fundamental to us as humans in our senses, emotionally and socially.” Examples she gives are a high school counselor being able to know whether a seemingly cheery student is actually having a hard time, or an artist able to find out exactly how her work affects people emotionally.

Another exciting possibility is in the field of healthcare and the potential of these systems to diagnose diseases. This technology can differentiate coughs and sneezes from other background noises, so it could discern if we’re ill and suggest solutions. If our speech patterns and body movements are being collected through persistent recognition, AI might use that data to determine whether we are developing early signs of diabetes, multiple sclerosis, bipolar disorder, or Parkinson’s, and warn us. Using real-world, labeled 911 audio of cardiac arrests, researchers trained the AI in smart devices to accurately classify instances of agonal breathing – an early warning sign of cardiac arrest. Thus, the AI on the other end of the 911 call (or even in the smart speaker in your house) might know you are having a heart attack before a human dispatcher does.

How do libraries fit in?

Libraries are well-positioned to educate their communities about this emerging technology. Some libraries have created apps for voice assistant platforms specifically so their patrons can use the library’s services by voice. Others are teaching classes on how to set the devices up, or publishing helpful FAQs.

Library staff should be aware of the potential of persistent recognition systems, both good and bad. There are very serious privacy and security concerns to keep up with and inform patrons about. And there are the beneficial uses that Crum envisions. Libraries already recognize the importance of personalization and inclusive services; we seek to understand and meet the needs of each unique user. It is inevitable that our communities will soon be joined by these AI partners with the capability to radically transform their lives, hopefully for the better. Let’s work to ensure this technology reduces barriers to access and helps us better connect with each other, enabling and empowering us to lead healthier, happier lives.

HHH: Neural Interfaces

Logo for Henry's High-Tech Highlights

Say goodbye to the keyboard; type ‘hello’ instead with your mind. And that’s just the beginning.

Today’s highlight: Neural Interfaces

What is it?

A neural interface (also called a brain-computer interface, or BCI) is a device that detects your nervous system’s signals. The “human neural system generates, transmits, and processes electrochemical signals in different parts of the body. The ‘electric part’ of those signals can be ‘read’ and ‘interpreted’.” (Kaspersky) With machine learning, the device’s interpretation can be turned into an action, such as typing or selecting an item on a screen, turning on the lights in your house, or moving a car or robotic arm. Hence, you can interact with something using only your thoughts. Despite what it might look like, it’s not the same thing as reading your mind. But that doesn’t mean it’s not amazing.
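
To make the “signals in, action out” loop concrete, here’s a deliberately simplified sketch of my own (not any headset vendor’s actual pipeline): estimate how much of a short EEG window’s power falls in the beta band, which is often associated with focus, and trigger a hypothetical action when that fraction crosses a made-up threshold. Real BCIs use far richer features and trained classifiers.

```python
import numpy as np

FS = 256                 # sampling rate in Hz (assumed)
BETA_BAND = (13, 30)     # beta waves, often associated with concentration
THRESHOLD = 0.35         # made-up fraction of total power that counts as "focused"

def band_power_fraction(eeg_window: np.ndarray, band=BETA_BAND, fs=FS) -> float:
    """Fraction of the window's spectral power that falls inside `band` (in Hz)."""
    freqs = np.fft.rfftfreq(len(eeg_window), d=1 / fs)
    power = np.abs(np.fft.rfft(eeg_window)) ** 2
    in_band = (freqs >= band[0]) & (freqs <= band[1])
    return power[in_band].sum() / power.sum()

def interpret(eeg_window: np.ndarray) -> str:
    """Map the feature to an action, e.g. toggling a (hypothetical) smart light."""
    return "lights_on" if band_power_fraction(eeg_window) > THRESHOLD else "no_action"

# One second of synthetic data standing in for a headset reading.
rng = np.random.default_rng(0)
print(interpret(rng.normal(size=FS)))
```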

Animated gif of woman wearing BrainCo Focus 1 headband controlling lights and robotic hand with mind.
Turn on the lights or move robotic hand with wearable. From ad for BrainCo Focus 1

In the past, this technology had to be implanted directly into the brain during a neurosurgery. Now there are non-invasive ways to detect these brain waves, simply by wearing a headset, headband or even a wristband. Soon we will be seeing this mind-blowing technology out in the world. In fact, last week Elon Musk of Tesla fame revealed his plans with Neuralink and brain-reading threads in a white paper. Defense Advanced Research Projects Agency (DARPA) is also spending $65 million to fund neural interfaces, Bryan Johnson has raised $100 million for his Kernel company, and Facebook is working on BCI as well. It’s only a matter of time before brain waves will become humanity’s next everyday tool.

Animated gif of man controlling a model car with Neurable BCI.
Ramses Alcaide, CEO of Neurable, demonstrates his brain–control interface on a model car; the technology has also been tested on full-sized automobiles. Image by University of Michigan/YouTube

Neural interfaces give the end-user additional autonomy. One great example is their potential to help us interact with the world and communicate more efficiently by cutting out the need to type with our hands. The world we live in now requires us to type a lot – not just when texting, but when entering information such as website addresses, our usernames and passwords, or locations in our GPS navigation apps, etc., etc. One of the most obvious uses for neural interfaces is to allow for hands-free typing. Voice recognition tech has advanced at an astounding rate, allowing us to speak and have our words accurately transcribed into text, but for people who are nonverbal, have hearing loss, or simply don’t want to be heard speaking their private correspondence out loud, voice is not a cure-all option. Neural interfaces are a great solution that will allow us to type with our minds while our hands, if we have use of them, can be occupied with other tasks.

Animated gif of man using BCI in front of screen to select letters to type a word.

Here are a few other use cases for neural interfaces:

> Neuroprosthetics:

There’s a lot of potential for people with disabilities. Those suffering from strokes and paralysis will have new options. In 2018, researchers at MIT debuted functioning smart limbs that are controlled by the brain.

This technology will allow us to effectively become cyborgs and develop super-human abilities. Want to fly like a bird? We’ll develop functioning bionic wings. There are ways all of us could benefit from a device that is constantly detecting our brain waves and interpreting them. What if we could be alerted right away to the fact that something feels wrong to us? It would be like an augmented intuition. That’s just the tip of the iceberg – there are many more uses we can brainstorm.


> In education:

Image of Chinese schoolchildren having their attention tested with BrainCo headbands.
Chinese students use BrainCo headbands in class

In a classroom setting, BCI could be used to test student engagement. Headbands that monitor concentration by reading brain signals have been tested on 10,000 Chinese schoolchildren. Lights on the device flag if the student is not paying attention. Say goodbye to daydreaming in class!

Neural interfaces could also provide tailored experiences for different learning levels. Imagine a digital learning environment that detects the student’s workload via brain waves (for example, how hard it is for them to solve an arithmetic exercise) and automatically adapts the difficulty of the exercises to hold the learner’s workload level in an optimal range. (See this research for more info.)
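
As a toy illustration of that adapt-the-difficulty loop (my own sketch, not the cited study’s algorithm): nudge the exercise difficulty up or down so an estimated workload score stays inside a target band. The workload scale and the band below are assumptions.

```python
TARGET_LOW, TARGET_HIGH = 0.4, 0.7   # desired workload range (0 = bored, 1 = overwhelmed)
STEP = 1                             # how much to change difficulty per adjustment

def adjust_difficulty(difficulty: int, workload: float) -> int:
    """Return the next difficulty level given the current workload estimate."""
    if workload > TARGET_HIGH:
        return max(1, difficulty - STEP)   # too hard: ease off
    if workload < TARGET_LOW:
        return difficulty + STEP           # too easy: raise the challenge
    return difficulty                      # in the sweet spot: hold steady

# Example: workload readings after successive arithmetic exercises.
difficulty = 3
for workload in [0.2, 0.35, 0.55, 0.8, 0.75, 0.6]:
    difficulty = adjust_difficulty(difficulty, workload)
    print(f"workload={workload:.2f} -> difficulty={difficulty}")
```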


> In AR and VR:

Companies like Neurable, Emotiv, and NeuroSky are creating AR/VR headsets that have non-invasive BCI hardware built in. You can play a video game and never use your hands.

Animated gif of man wearing Neurable headset selecting items in a VR space using BCI.

In an Augmented Reality space, this will prove particularly useful when interacting with holographic interfaces. Imagine walking down the street wearing your AR glasses and pulling up a display where information must be entered into a form. Gesturing with your fingers to type on a holographic keyboard could work, but wouldn’t it be so much better (and far less silly-looking to passers-by) just to think of the letters you want to input?


> CTRL-Labs’ wristband is a really cool alternative to a headset:

Animated gif of man wearing CTRL-labs wristband to have computer generated hand on screen mirror his hand gestures exactly using only BCI.

No need for a glove. Using only a single band of sensors around the forearm, this wristband knows what the hand and fingers want to do and renders it as a virtual hand on the monitor.

Animated gif of man wearing CTRL-labs wristband and having his hand held closed by another man. He thinks of what he wants his hand to do, and the computer-generated hand still does it.

Even when the hand is constrained – not moving at all – the wearer imagines what he wants to do with his hand and the wristband can pick up his intention from the electrical pulses along the neurons in his arm. So even those without hands may still be able to use this technology. My mind is blown.

Animated gif of man's face looking amazed and having his mind blown.

> You can now get a drone that comes with a headset to control it:

The idea that you can wear a headband and use your mind to have a flying drone accompany you is the stuff of science fiction fantasy. My 7-year-old self is high-fiving me right now.


> What about libraries?

Graphic of a lightbulb

New programming, new classes: BCI is a skill, requiring the user to be properly trained to achieve control over the neural interface. If they cannot perform the desired mental commands, no signal processing algorithm can identify them. To use this new tech, one may have to learn how to focus on a specific task. Mindfulness practices will become more important than ever. Libraries may become a place for patrons to take a mindfulness or concentration class in order to learn how to use this new technology.

Graphic of a lightbulb

Patron privacy: If these devices attain mass consumer adoption as they are poised to do, and companies then get access to millions of users’ brain states and collect data about them for their own purposes, there are enormous privacy and ethical issues at stake. With authoritarian governments controlling the data or mandating citizen use (see schoolchildren example above), it could help spur the creation of a surveillance state. Libraries are already helping to keep their communities informed about privacy awareness and may have to increase their efforts.

Graphic of a lightbulb

What do you think? How else do you think libraries will be involved with neural interfaces? Let me know your thoughts! I want to detect those brain waves. 🙂

HHH: 5G

Logo for Henry's High-Tech Highlights

We can talk about virtual reality, self-driving cars, AI, and robots all day – but the truth of the matter is that none of these amazing technologies will work in the world as we all hope and envision unless we have one thing first. There’s a deceptively tiny word for this thing – just two characters: one number followed by one letter. But don’t be fooled: it’s HUGE.
Today’s highlight: 5G


What is it?

Short for Fifth Generation, 5G is the next wave of wireless network technology. 1G gave us the cell phone, 2G gave us the ability to send texts, 3G gave us the mobile web, and 4G LTE made everything about 10 times faster. 5G hasn’t quite arrived on the scene just yet, but it’s supposed to be right around the corner. Your phone will be able to get 10 gigabits per second, which is 600 times faster than the typical 4G speeds on today’s mobile devices, and 10 times faster than Google Fiber’s standard home broadband service. At those speeds, we’ll be able to communicate in almost real time, with as little as 1 millisecond of lag.
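
To put those numbers in perspective, here’s a quick back-of-the-envelope calculation using the figures above (the 5 GB movie is just a hypothetical example):

```python
GBPS_5G = 10                     # promised 5G speed, in gigabits per second
MBPS_4G_TYPICAL = 10_000 / 600   # ~17 Mbps, implied by the "600 times faster" figure
GBPS_FIBER = GBPS_5G / 10        # 1 Gbps

movie_gigabits = 5 * 8           # a hypothetical 5 GB movie, expressed in gigabits

print(f"5G:         {movie_gigabits / GBPS_5G:.0f} seconds")
print(f"Fiber:      {movie_gigabits / GBPS_FIBER:.0f} seconds")
print(f"Typical 4G: {movie_gigabits * 1000 / MBPS_4G_TYPICAL / 60:.0f} minutes")
```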

But it’s not just our phones. The future, as envisioned in the concept of the ‘Internet of Things’, is going to be fully computerized and data-driven in all the devices and appliances within our environment – our thermostats, our cars, our streets, our cities. We’ll be going from 300 million connected devices to 3 billion, and many will be imbued with intelligence, with integrated voice control and ambient interfaces able to personalize the experience for each unique user. Everything will talk to everything else, sending data back and forth. Even the trees will likely have sensors that communicate via a network. Right now, the only way we can reach the necessary speeds to bring about this new interconnected future is through a wired connection. But that won’t work, of course. Wires and cables can’t link everything up. The solution to all this is 5G wireless. Once 5G arrives, the science fiction future we imagine will be possible.

Without 5G, we will never see:

  • Smart cities
  • Self-driving cars
  • VR and AR on mobile devices
  • Remote surgeries via robots

5G is so exciting that in recent surveys, industrial companies rank it above Artificial Intelligence (AI) as an enabler of digital transformation. Its importance has caused the equivalent of a space race to begin. If a country gets to 5G first, Wired Magazine points out, “its burgeoning tech industry will create the next global mobile platform.” China having 5G would give them an edge in other important industries, too, such as AI. You’ve heard the expression: “The one who has the gold, has the power”? The reality is, “The one who has the data, has the power.” And 5G means more devices, and more devices on a network means more data, and since AI needs data for its training, more data means better AI. It all comes back to 5G. Potentially, it could also unify all services (wireless, wireline, and satellite) under a common digital structure.

Where are we now? And what does it take to get there?

Right now, the most privileged of us have 4G – which uses spread-out cell towers that can broadcast over great distances. It’s not capable of the speeds we need to reach, though. To get 5G we will want to use “millimeter waves”, the very high end of the wireless spectrum, where there’s plenty of unused bandwidth. The problem is that this technology is not good at long distances or where there are obstructions like trees, people, and even rain. To make it work, it requires a huge number of access points, or base stations, rather than a few big cell towers. And those access points are connected to a wired network infrastructure, a fiber one. So despite calling 5G a wireless solution, it’s powered by fiber in the ground with a tiny cordless last mile. No fiber? No 5G.

Diagram that shows underground fiber cabling enabling 5G wireless with small cell technology.
5G requires frequent base stations supported by extensive underground fiber network

Here are a few things to be aware of as we enter the Age of 5G:

> It will take longer to arrive than we think.

Although the year 2020 was thrown around for a while as the year 5G would make its appearance in our lives, it needs more time than that. Even now, Verizon and AT&T have launched what they’re calling 5G in some cities, but this may be deceptive as they’re still technically using 4G technologies. PC Mag writes, “AT&T has started to call its 4G network ‘5G Evolution,’ because it sees improving 4G as a major step to 5G. It’s right, of course. But the phrasing is designed to confuse less-informed consumers into thinking 5G Evolution is 5G, when it isn’t.” For AT&T, 5G speeds will be capped at 2 Gbps. Very fast, but not quite the 10 gigabits per second 5G is supposed to provide.

To make 5G happen, it’s going to take a lot of investment and a massive deployment of hardware, which is all very time-consuming. 5G requires much smaller cell stations every few blocks in order to provide coverage, rather than the cell towers that 4G uses, which can broadcast signals for miles. It requires local approval, and there are often huge regulatory fights. Instead of thinking it’s going to launch next year, we should re-frame it as an investment for the next decade. Despite how fast it will be, 5G is going to be slow on arrival.

> It may widen the digital divide even further.

Can we really expect 5G to come to the rural and underprivileged areas any time soon? As Wired writes, “less oversight and fewer carriers could translate into higher prices and less availability for 5G… [and] without oversight, carriers might opt not to build 5G networks in low income or rural areas that could prove less profitable.”

As I mentioned before, the great wireless revolution requires a fiber backbone in the ground to run. Rural areas often don’t have the fiber that’s required.

Another frustrating possibility is that efforts to bring about 5G could actually reduce rural coverage even more than is already present:

“[To move to 5G] Verizon has recently discontinued activating 3G handsets and has been decommissioning 3G equipment, but not always replacing coverage in those areas, which leads to many people in the marginal coverage areas having no access at all.”

– Deborah Simpier for BroadbandBreakfast.com

The step to 5G could actually mean skipping a step, since the existing 4G isn’t even meeting current needs across the country. It’s hard not to agree with Christopher Elliott, who wrote in Forbes, “It would be nice if the providers could provide rural areas with consistent 4G service first.”

> It may be bad for your health.

Many have pointed out the possible health risks of having so many 5G cell stations and their electromagnetic radiation near people’s homes. Although they are often referred to as “small” and only the size of a pizza box, the technology to justify this terminology isn’t quite there yet. We can expect more of a refrigerator size for the time being. The aesthetics, the psychological effects, and the impact on property values of living among rows and rows of these small fridges attached to practically every rooftop, utility pole, and lamp post could all have negative consequences for humans and animals alike.

> It could throw off weather forecasting.

5G is literally on (nearly) the same wavelength as weather forecasting: some 5G bands sit right next to the frequencies satellites use to sense water vapor, so 5G transmissions would compete with those observations. This interference could decrease forecast accuracy by 30% – setting us back four decades to the lower-quality forecasts we had in the 1980s. It means you may not hear about the hurricane coming to your area in time. It’s a major concern recently raised by NASA, NOAA, and others, and it needs to be addressed.

> It could further threaten our personal data and privacy.

If 5G is what brings about the Internet of Things – with massive communication happening on a constant basis between our personal devices and the environment – then it could herald the beginning of intrusive digital advertising that occurs not just in the computer browsers within our homes, but out in the world as we’re moving about. Without oversight, marketers using this technology may not respect consumers’ privacy. Also, as Fast Company pointed out, with 5G’s smaller coverage areas, “anyone with access to your ISP’s cell tower data will be able to hone in on your exact location far more precisely than they can today under our 4G networks.” You will no longer connect to a distant 4G tower a mile away, but to a 5G one right near you, and as you move around, you will quickly connect to the next one closest to you. Your whole path can then easily be tracked, and your location accurately determined at any moment simply by knowing which 5G tower you’re next to – even down to the building you’re in.

Graphic of a confused person

Final Note:

There are definitely issues with 5G to work out, but don’t let this list scare you. It’s best to be aware of what’s coming and face it head on. Don’t keep your head in the sand; stay vigilant, ask questions, and don’t fear the future. It’s our duty as librarians.


Remember that:

  • You personally don’t have to know everything
  • Everyone is learning
  • You just have to be open to finding the answers.

It’s the librarian way.

HHH: Telehealth

Logo for Henry's High-Tech Highlights

Let’s talk about an emerging trend that promises to transform lives and become the next big thing. And libraries are poised to play a big part in it.

Today’s highlight: Telehealth

What is it?

When you cross telecommunications technology with healthcare, you get telehealth. And this combination reaps huge benefits. As a supplement to traditional medical care, people can receive healthcare more quickly without having to travel to a physical location. This makes it more affordable and accessible, enabling many to receive higher-quality medical attention that improves and saves lives.

Stock photo of doctor consulting a patient via telehealth videoconferencing.

What is it good for?

  • Telehealth can address obstacles and burdens posed by geography, transportation, and other mobility issues.
  • By having more remote consultations, doctors are freed up to spend more time on patients who have more serious medical issues. Patients can avoid visiting the ER unnecessarily.
  • It reduces the risk of contagion. When flu outbreaks occur, for example, entire schools may not have to shut down to prevent the spread of the illness.
  • It also benefits the economy. The annual cost of the opioid crisis to the state of Texas is $20 billion (for treatment, rehab, criminal justice, foster care, social services, etc.), a significant burden on the state’s economy. Telehealth can help alleviate these costs considerably – so there’s economic incentive to leverage networks for telehealth purposes. Read more about how substance abuse recovery is supported by telehealth.

The idea of telehealth has been a longtime dream of forward thinkers. Back in the 1920s, people were envisioning what could be done using the newest magical technology of the time: radio.

1920s magazine issue of "Science and Invention", showing doctor remotely visiting patient who appears in a viewscreen
Science and Invention: “Diagnosis by Radio”
1920s magazine issue of "Radio News", showing patient remotely visiting doctor who appears in a viewscreen
Radio News: “The Radio Doctor – Maybe!”

In the far futures depicted in shows like the Jetsons or Star Trek, we predicted doctors would have the means to use wireless technology to scan their patients and help diagnose and treat them.

Screenshot of the Jetsons cartoon showing a doctor in a video screen examining Elroy's tongue; Screenshot of Dr. Crusher from Star Trek: The Next Generation examining her patient with wireless technology.

Today, telehealth is no longer the stuff of wish-fulfilling science fiction. With the rise and ubiquity of affordable mobile devices, two-way video-conferencing, sophisticated sensors, cloud-based services, and the high-speed Internet to power it all, we are seeing fantasy become reality. This futuristic tech is getting integrated into our everyday lives.

How do libraries fit in?

Libraries are the logical place to offer telehealth services (see Daily Yonder article, Jan. 2019 to read more). Libraries already direct patrons to authoritative health and wellness information (see my blog post to read more). What if they helped offer basic services as well?

Graphic of a lightbulb

Along with a WiFi hotspot, libraries could check out a telehealth device to patrons. This could be a digital otoscope or other diagnostic tool that transmits readings to healthcare providers, or wearables with sensors that help patrons monitor their health over time.

A photo of a digital otoscope and various health wearable devices.
Graphic of a lightbulb

Libraries could also act as a telehealth outpost. This could be a teleconferencing kiosk that patrons visit while at the library. At the Jackson County Public Library in Appalachian Kentucky, a conference room has been converted into the nation’s first Virtual Living Room Telehealth Center to serve veterans.

Picture of a very low WiFi Symbol

There’s a problem with this new life-saving tech: telehealth runs on broadband. Internet speeds need to be very high for it to work. Many communities are falling behind in connectivity at the same time that many populations, including seniors, are realizing their need for telehealth – a big disconnect. As soon as this technology hits mainstream popularity in urban areas, rural communities are going to demand broadband even more than they already have, so that they can gain access to these new services and the higher quality of life that goes with them.

Libraries are primed to be the go-to spot for telehealth, as they are often the only place in their communities to get free, fast Internet, and they touch people of all ages and economic levels.

Furthermore, telehealth requires patients to have a certain level of digital literacy. Brand new systems and interfaces are being developed to facilitate these services, and many will be left behind if they can’t figure out how to navigate them. Library staff are already positioned as digital literacy experts to assist their communities with this new technology and to provide the foundational, basic skills training. Patrons can visit the library for help with everything from telehealth apps on their smartphones, to wearables, to “digital pills”.

A photo of a digital pill next to its app.
A “digital pill” with transmitting sensors comes with its own monitoring app

And the same technologies that allow sick patients to visit the virtual doctor can allow sick patients to attend school or work – via robot.

But that’s a Henry’s High-Tech Highlight for another time.

Photo of robot avatar in a school.
Baty, 15 years old, has little or no immune system because of polycystic kidney disease, so he goes to school via a robot avatar with a computer-controlled camera. CREDIT: Deanne Fitzmaurice/Sports Illustrated/Getty Images

HHH: Video Games & Esports

Logo for Henry's High-Tech Highlights

Hello! Henry here. I’m happy to highlight a new high-tech hot topic. It’s historically been a hobby, but now it’s headlong become a hardcore habit, heavily hitting the right buttons on people’s hearts. Today I’m talking about…

Video Games & Esports


NEWSFLASH!

Two weeks ago, Google announced it was getting into the gaming business with its own platform called Stadia to come out this year:

Logo for Google Stadia

A week later, Apple announced it was launching its own gaming platform this year called Apple Arcade (along with other new services such as a credit card and streaming TV channel).

Logo for Apple Arcade

What’s so special about these two tech giants’ gaming platforms? Typically video games require special hardware called consoles, but Google and Apple are each promising their platforms will remove the need for players to buy separate consoles and will instead put games on what people already own – their phones, tablets, laptops, computers, and TVs.

Google Stadia will actually run through the Google Chrome browser, which will stream the games live over your broadband. It will be integrated both with Google’s voice assistant so you can get help from the AI during challenging parts of the game you’re playing, and with YouTube (owned by Google) so you can easily share out a live stream of your gameplay.

Meanwhile, Apple Arcade will be an app that will run on Apple devices. It will be like a Netflix of games – an all-you-can-play service via a single subscription.

Details are still slim about both platforms, especially Google’s – but we do know Stadia will require an Internet connection, which means no playing offline. As for what Internet speeds you’ll need to support Stadia, Google recommends 25 megabits per second – the same speed Netflix suggests for streaming its highest-quality content. Most libraries in Texas don’t even meet this 25 Mbps download requirement once all the patrons sharing the WiFi network are accounted for.
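
A rough bit of arithmetic shows why that matters for a shared connection (the 100 Mbps figure below is just a hypothetical library broadband speed):

```python
STADIA_MBPS = 25                 # Google's recommended per-player speed
library_download_mbps = 100      # hypothetical library connection

concurrent_players = library_download_mbps // STADIA_MBPS
print(f"A {library_download_mbps} Mbps connection supports about {concurrent_players} "
      f"Stadia players, with nothing left over for everyone else's browsing.")
```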

So why should I care?

What impact will two of the biggest industry names in the world entering the gaming market have on our communities and our libraries? What does gaming have to do with libraries anyway?

A lot, actually. Games and libraries are more connected than you might think. As ALA’s Center for the Future of Libraries points out:

  • Both promote interest-driven learning and self-directed discovery.
  • Both help improve social skills. ALA writes, “Equally important, libraries as public gathering spaces can capitalize on the benefits of co-play, helping to improve players’ social skills by encouraging play together, in small groups, or large classes. The social setting of the library may also encourage users to be reflective in their play, building awareness, asking questions, and processing what is being learned through play.”
  • Both support digital literacy. Games help and encourage people to learn how systems (like interfaces and computers) work. These are crucial, next generation job skills, and libraries being in the business of assisting their communities with workforce development are wise to take notice.

For these reasons and many others, libraries – even here in Texas – have recently started offering esports programming.

Esports?

The Future Today Institute (FTI) publishes an annual report on emerging technology trends, and for the first time in 2019, they’ve included esports – competitive digital gaming with all the trappings of traditional sports. They write that “advancements in both gaming technology and streaming capabilities have led to an astronomical rise in its popularity and perceived legitimacy in recent years.” And they predict it’s primed to continue as a major cultural phenomenon. According to a market report by Newzoo, global esports revenues reached $906 million in 2018, up 38% year on year. The ridiculously popular game Fortnite is a big reason for this. Viewership of esports tournaments may soon rival that of the NFL. FTI points out that esports results in a more engaged audience because it’s so accessible – the skills needed to compete are more attainable than in classic athletic sports, “closing the gulf between fans and competitors.”

Screenshot from Simpsons episode featuring esports

What does the rise of esports mean for libraries?

I’m glad you asked! There’s a lot to explore here.

I’ve actually asked someone who successfully runs an esports program for their public library to conduct a free webinar for us on the topic of esports and libraries next month, on April 25th. Hope you’ll join me! Here are the details and the link to register below:

Title: Get in the Game: Esports and Libraries

When: Thu, Apr 25, 2019, 2:00 PM – 3:30 PM CDT

Description:
Have you heard of esports but want to learn more? Ever wonder if esports could be featured in libraries? Interested in reaching and engaging more patrons through gaming and esports? Are you intrigued by a program offering which attracts a broad cross section of patrons of different ages, races, genders, and socioeconomic standing? Let’s take an in-depth look at esports and its community and discuss ways to build more games-related programming in libraries. Join our webinar with Tristan Wheeler (Outreach and Programming Services, Cleveland Public Library in Ohio) for an introduction to the Cleveland Public Library GAMING & ESPORTS event series. Discover how the world of libraries meets all things gaming and learn why a program like this is important to his library and could be for yours!

CE Credit: 1.5 hours

Register now for this free webinar from TSLAC!

Let’s play with the idea! See you April 25.

HHH: Artificial Intelligence

Logo for Henry's High-Tech Highlights

Hello, Henry here! Happy High-Tech Highlight Day. Today’s highlight is something big, something I’ll undoubtedly come back to again and again because it promises to radically change everything about our lives and the future of humanity in so many ways:

AI. Artificial Intelligence.


One way to think of AI is as a child, with humans as the parents. Our computers used to only follow a single set of explicit instructions each time they tried something. Now, with advancements in machine learning, AI can recognize patterns and infer things. And with humanity’s help in this process of gaining knowledge, it will soon be able to surpass its teachers and accomplish so much more.

But not like this…

AI won’t necessarily become our antagonist, but rather our partner – offering behind-the-scenes enhancements. With AI, we’ll become augmented humans, using our former student’s help to improve our own lives.


Things are happening pretty quickly on the AI front. Check out these two crazy-sounding recent advancements:

AI can sense people through walls. Which means we will eventually be like Superman.

AI can identify genetic disorders by simply looking at a face. Which means we will eventually be like psychics or fortune-tellers.

As libraries, we can do a lot to help prepare our communities for this huge change and for what’s going on with our still-developing child, AI. Here are a few free resources and tools I recently learned about that showcase machine learning; you can access them via your browser or a mobile app.


Evolution – browser-based tool with an app as well: "Use joints, bones and muscles to build creatures that are only limited by your imagination. Watch how the combination of a neural network and a genetic algorithm can enable your creatures to "learn" and improve at their given tasks all on their own."


SeeingAI – Load this free app from Microsoft on your phone and point your camera at people and things. An AI will attempt to report what it sees. Give your own feedback on its accuracy and the AI learns!


Quick Draw – This browser-based game from Google gives you prompts to draw different things, but what you are actually doing is contributing to the AI’s pattern recognition skills. Over time, it will get better and better at associating a drawing with a particular concept.


You can then pair Quick Draw with another cool tool from Google: AutoDraw. Here you can actually start to draw something, and the same AI that was honed through Quick Draw starts to suggest what you might be trying to depict. If you ever need an icon or quick representative graphic for something, feel free to use the AI’s guesses as your own.


Creative Help – Start typing a story in this browser-based tool, then see what the AI does to continue the story with another sentence. Could be useful in the creative process.


Microsoft Translator – Recently released by Microsoft for free, this app is loaded with multiple languages. You can have a translated conversation without having to continually change the language input settings; the AI does it automatically. This is great in classrooms: just share a code and up to 100 students can read what you say, as you say it, translated into whatever language each student has set.

HHH: Interactive Print

Logo for Henry's High-Tech Highlights

Hi there, Henry here! In my monthly column, “Henry’s High-Tech Highlights”, I share my thoughts on an emerging technology and its relevance to libraries.


Today’s highlight: Interactive Print


What is it?

Interactive print is anything that enhances the printed page with interactivity. Remember QR codes? They’re one early example of interactive print. Scanning a QR code can point you to another location, usually a website, which presents you with additional content or information.
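
If you’ve never seen it from the creation side, generating the code that goes on the page is trivial. Here’s a minimal sketch using the third-party Python qrcode library (installed with pip install "qrcode[pil]"); the URL is just a placeholder.

```python
import qrcode

# Encode a destination URL and save the image for placement on the printed page.
img = qrcode.make("https://example.org/more-about-this-article")
img.save("interactive-print-code.png")
```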

Technology has been steadily advancing well beyond the QR code and getting closer all the time to Harry Potter levels of magic.

Animated gif of Harry Potter scene
Harry Potter’s magical moving newspapers

How it’s being used:

In an earlier HHH post, I discussed Augmented Reality (AR). AR is the most promising way to make print more interactive. By pointing your phone’s camera at the page (usually after downloading a special app), or by looking through AR-enabled glasses (coming soon!), you can see an overlay of content and information to interact with. The page seems to come to life.

Example of AR showing 3D building when pointed at blueprint

AR incorporates the digital world into the analog one. But what if you stick with analog and still make your print interactive? The marketing industry has been pioneering this ‘old school’ approach recently, and the use of interactive print in advertisements is growing rapidly.

Examples of innovation in this area:

  • Magazine ads that change color when you push a button on the page:
Photo of Motorola magazine ad
Motorola ad
  • Inserts that collect solar power to charge your cell phone:
Nivea ad
  • A car ad that plays sounds, emits smells, and checks your heart rate while simulating a race:
Photo of interactive print ad
Toyota ad for 2018 Camry

Check out more examples of innovative interactive print ads.

So why should I care?

Whenever there’s a new technology, it’s helpful to ask the question, ‘What problem is it going to solve for you that you can’t solve today?’

People love print, and we in the library world know it’s not ever going away. I thought the following video said it well:

“What’s more interactive than touch? The feel of texture. The direct accessibility and immediacy, the three-dimensionality of something you hold in your hands. That’s print. You touch it, it touches you. It’s the medium that invented interactivity.”

Print is here to stay, but we have also grown to love the functionality of the digital environment, which increases our abilities and enhances our lives. This is partly why we invented ebooks and e-readers: so we could incorporate the digital world into our reading experience.

But what if you could bring the same added enhancements and features of ebooks to the printed page? Have your physical book, and interact with it too?

This is just around the corner. And as repositories of both the printed and digital word, it’s the library’s business to keep up with these advancements.

How do you see it playing out?


Future use ideas:

Lightbulb icon

Imagine: A patron plugs headphones into a magazine and can access another layer of accompanying audio content. Check out a demo of this concept:

Lightbulb icon

Imagine: A patron places printed work on a smart desk, which enables her to accomplish much of what she can do with an ebook. Check out some demos of this concept:

Lightbulb icon

What else comes to mind for you when thinking about interactive print? Send your ideas to Henry at hstokes@tsl.texas.gov and I’ll share them during future Henry’s High-Tech Highlights!

Animated gif of Severus Snape reading a newspaper.
Until next time!