Wednesday, January 22, 2014

Google Glass is the API to control humans (and it's not as bad as it sounds)

With Google's acquisition of Nest, it occurs to me that Google Glass may have more far-reaching ambitions for human society than most of us can imagine right now.  And it has less to do with serving the needs of the person wearing the Glass than with controlling that person to do the things that the Glass is telling them to do.  So cynics can rest comfortably with their critiques of whether or not Glass is useful -- it may not be, to them.  But that's okay.  In other words, Glass may be the beginning of the API -- application programming interface -- to control humans.

Hold on, wait, it's not as bad as it sounds!  Really!  What I mean is augmented intelligence, not mindless drone control.  Okay, maybe it's a little creepy.

A few years ago I was lucky enough to be in Cambridge, Mass. to meet Sep Kamvar, a former Googler and now a Professor at the MIT Media Lab.  Amongst other fascinating things he described (e.g. he felt iGoogle had made some missteps with respect to the curation of its early developer community members), he posited the possibility of programming languages to interact with real-world, meat-space problems, not just objects and methods in digital space.

For example (my example, not his), just imagine if Andy Samberg and Justin Timberlake could write JavaScript to do the following, not just in virtual space, but in the real world:

box.cut(hole)
box.put(junk)
box.open()

APIs to interface with the real world could be thought of as, maybe, robotics.  But think of where Amazon has gone with Mechanical Turk -- Human Intelligence Tasks where you can describe a task for a human to perform, and interface with a computer system.  As Amazon describes this, it is Artificial Artificial Intelligence.  I like a new platform called TaskRabbit, where you can basically find help to do stuff in real life -- household errands, skilled tasks, etc.  TaskRabbit can basically match up extra labor supply with sporadic demand.
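As a toy illustration of that shape -- and this is entirely hypothetical, not the real Mechanical Turk API -- a "human intelligence task" is just a data structure: a description, some instructions, and a slot for whatever the human comes back with:

```javascript
// Hypothetical sketch of a "human intelligence task" in the spirit of
// Mechanical Turk. All names here are invented for illustration.
function createTask(description, steps) {
  return {
    description,
    steps,
    results: [],
    // A worker (human, reached through some interface) "completes" the
    // task by submitting an answer back to the software that posted it.
    submit(workerId, answer) {
      this.results.push({ workerId, answer });
      return this.results.length;
    },
  };
}

const task = createTask("Label this photo", [
  "Open the image",
  "Type what you see",
]);
task.submit("worker-42", "a cat on a sofa");
console.log(task.results[0].answer); // "a cat on a sofa"
```

Mechanical Turk's actual interface is far richer (qualifications, payments, approval workflows), but the essential shape -- software posting a task, a human completing it -- really is that simple.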

So what if the end-game for Google Glass isn't just an interface for the wearer to be the user?  What if, instead, it's the best way to seamlessly strap a control mechanism onto another human?

Creepy, right?  Well, consider the use cases in health care.  What if you have someone who's relatively skilled and proficient with their hands, but only a novice with respect to healthcare?  What if you could assign tasks and walk that person through anything from asking the appropriate questions and looking for the appropriate signs (physical manifestations) to diagnose malaria?  Or administering the right treatment?  Or performing a surgical or invasive procedure?  What would it mean to global society if suddenly we didn't have to take a whole decade or more to train a doctor, but could mobilize masses of lower-skilled workers and put Google Glass on them to enhance their skills and knowledge, augmented by an instruction set designed by the world's best doctors and engineers?
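To make the "instruction set" idea concrete, here's a minimal JavaScript sketch of a checklist that a remote system could stream to a Glass wearer one step at a time.  The steps and wording are invented purely for illustration -- this is not real medical guidance:

```javascript
// Hypothetical sketch: a checklist an expert system could feed, one step
// at a time, to a novice wearing a heads-up display. The content of the
// steps is made up for illustration only.
function makeChecklist(steps) {
  let i = 0;
  return {
    next() { return i < steps.length ? steps[i++] : null; },
    done() { return i >= steps.length; },
  };
}

const malariaScreen = makeChecklist([
  "Ask: fever in the last 48 hours?",
  "Ask: recent travel to an endemic region?",
  "Check: chills, sweats, headache?",
  "If yes to all, administer a rapid diagnostic test.",
]);

// Each call to next() would correspond to one prompt in the wearer's display.
while (!malariaScreen.done()) {
  console.log(malariaScreen.next());
}
```

The point isn't the code, which is trivial; it's that the expert knowledge lives on the far side of the interface, and the wearer only needs hands and attention.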

In the same way that you can imagine Cyrano de Bergerac whispering and coaching Christian beneath Roxane's balcony as to what to say, what if someone -- or something -- on the other side of Google Glass could enhance the wearer's abilities beyond what was ordinarily humanly possible in a world where they'd otherwise need to take the time in school to learn and train on whatever it is they were supposed to do?

If you've ever watched the TV show Chuck, or even better, a 1969 Disney live-action classic called "The Computer Wore Tennis Shoes," you'll recognize the similarities here.  In those two examples, a human being was magically gifted with ungodly amounts of knowledge that they could recall instantly.  In Chuck, Zachary Levi's titular character was not only able to "flash" on knowledge, but eventually do Matrix-style instant-learning of physical, real-world skills -- say, martial arts fighting styles, flips, fencing, or how to speak a new language.

And, just so we don't forget the implications for healthcare, check out 7:40 in this compilation of clips from Chuck, where Chuck "flashes" and instantly learns surgical skills to be able to safely extract a bullet from a spy-friend's leg.

Oh, and for those of you who somehow missed the last decade, here's the scene from The Matrix where Keanu Reeves's character, Neo, learns jiu-jitsu in an instant.  By the way, everyone remembers Neo's line "I know kung fu" from that scene.  The really important line slips away almost undetected, when Tank says: "Ten hours straight... he's a machine."  Neo has been boosted -- programmed -- to become super-human -- machine-like.

So I know we'll have our share of thoughts on how great Google Glass will be as a heads-up display for the wearer of Glass to, say, look up information, or get contextual alerts.  But I think the gang at Google X may have greater ambitions -- and that is, to have a mechanism to see what the user is seeing (the front-facing camera), where their attention might be fixed (the wink camera), what they're hearing (the speech-processing microphone), and finally a way to "control" or transmit instructions to the user (the prism display and the earphone/bone conduction audio setup).  Even in its most basic form, it can be a brilliant, anywhere-capable teleprompter, which anyone from the President of the United States to Anchorman Ron Burgundy would probably acknowledge wields tremendous power.

What if, for example, you could pack at least one Google Glass on every airplane?  Then, in the rare case that an emergency arose, flight attendants wouldn't just be asking if there were medical professionals on the plane -- any able passenger could don a set of Google Glass (especially if there were WiFi on the flight) and get walked through, either by A.I. or a human supervisor or both, the interactions and actions necessary to see the patient safely through the emergency.

What if you could keep a Google Glass unit everywhere you had reason to staff people who knew CPR, or to keep a defibrillator around?  Sure, maybe people might know CPR or what to do, but what if you could instantly convert any able-bodied person into a CPR-capable helper?

Or what if you could give every school nurse a Google Glass unit?  They're already healthcare professionals -- now let the school nurse interface with a team of pediatricians ready to help them handle any more advanced issue, even if only to help triage it.  My daughter was in a kindergarten class with another girl who had Type 1 Diabetes.  Her parents were at wits' end, having to constantly come in to accompany her on field trips, or to visit with the nurse and help guide the care of her insulin-dependent diabetes.  What if Glass had enabled them -- or a pediatrician -- to join and watch over her remotely?  To guide an able-bodied helper, or the school nurse, or even their daughter directly, in the care and management of her Type 1 Diabetes glucose level checking and insulin injection?

Of course, to realize this vision we'd need a lot more sensors around.  One could argue for the need for wireless-capable "smart" sensors, like a Bluetooth glucometer -- but really, for the first version, if you had Glass, all you'd need is for the observer/supervisor to be able to see the glucometer reading from the Google Glass video feed.  Indeed, if Google Glass's optical character recognition grew to be as good as its speech recognition, it might be able to parse that data and realize that, in the field of view, there was some alphanumeric data to process and hand off to other pieces of software.
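That last step is plausible with simple tooling: once OCR has turned a video frame into text, extracting a glucometer reading is a pattern match.  A minimal JavaScript sketch -- the regex and units are my assumptions for illustration, not any real device's output format:

```javascript
// Hypothetical sketch: pull a glucose reading out of OCR'd text from a
// video frame. Assumes (for illustration) the meter displays "NNN mg/dL".
function parseGlucose(ocrText) {
  const match = ocrText.match(/(\d{2,3})\s*mg\/dL/i);
  return match ? Number(match[1]) : null;
}

console.log(parseGlucose("GLUCOSE 112 mg/dL 8:15 AM")); // 112
console.log(parseGlucose("no reading visible"));        // null
```

A number like that could then be handed off to other software -- logged, charted, or flagged to a remote supervisor if it falls outside a safe range.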

Sure, the dream is to someday have a holographic, artificially intelligent doctor, like The Doctor on Star Trek: Voyager.  But what if, as a stepping stone to artificial intelligence, we might do something that might be described as "augmented intelligence"?

I, for one, welcome our new Google overlords.  MIT Technology Review has theorized that innovation and technology destroy jobs.  In a June 2013 article, David Rotman points out that Professor Erik Brynjolfsson and his collaborator Andrew McAfee argue that industrial robotics and other advances in productivity (self-driving cars, anyone?) will reduce demand for many types of human workers.  But this is predicated upon the notion that innovative technology won't augment the capacity for humans to upgrade their mojo, so to speak -- that is, to take these obsolete workers and still render them useful as actors in the physical world.

It's downright Shakespearean if you really take a step back and think about this eventuality for Google Glass -- as the beginning of an API to control human beings.

As You Like It
Act II Scene VII

All the world's a stage,
And all the men and women merely players:
They have their exits and their entrances;
And one man in his time plays many parts,
His acts being seven ages. At first, the infant,
Mewling and puking in the nurse's arms.
And then the whining school-boy, with his satchel
And shining morning face, creeping like snail
Unwillingly to school. And then the lover,
Sighing like furnace, with a woeful ballad
Made to his mistress' eyebrow. Then a soldier,
Full of strange oaths and bearded like the pard,
Jealous in honour, sudden and quick in quarrel,
Seeking the bubble reputation
Even in the cannon's mouth. And then the justice,
In fair round belly with good capon lined,
With eyes severe and beard of formal cut,
Full of wise saws and modern instances;
And so he plays his part. The sixth age shifts
Into the lean and slipper'd pantaloon,
With spectacles on nose and pouch on side,
His youthful hose, well saved, a world too wide
For his shrunk shank; and his big manly voice,
Turning again toward childish treble, pipes
And whistles in his sound. Last scene of all,
That ends this strange eventful history,
Is second childishness and mere oblivion,
Sans teeth, sans eyes, sans taste, sans everything.
