Friday, March 7, 2014
Note: This blog post is the second part of a two-part series describing my experience using the Google Glass in the medical school setting. The events in this series occurred in January 2014 over a 2-3 week period. Enjoy!
---
How much a few weeks can change things! A few weeks ago, I had been struggling even to keep the Google Glass turned on, but soon enough, I became a whiz (relatively speaking) at the Glass's controls. Let me recap some of the memorable (both good and not so good) moments.
I think by now, at least two dozen of my classmates have tried on this pair of Glass with great excitement. All of them had heard of Google Glass before, so I suppose Google's advertising campaign is successful. After they've put it on, they usually ask me, "So what can it do?!" "Why don't you take a picture?" "OK, Glass. Take a picture." Pause. "Wow! That's so cool!" Even several of my professors came by to try out this new technology. I'm sure I will now forever be known among the faculty as "that Google kid."
Fun and games aside, how did I use the Google Glass in my medical school classes? Let's talk about the positives first. One big plus is that I can photograph the whiteboard, record a lecture, or shoot video of a small group discussion without seeming too conspicuous. All of the Glass's functions can be touch-activated by tapping and swiping the touch-sensitive rim. Another nifty feature is the built-in translation function. Imagine if you were looking at a poster in the subway that was written in Spanish. With a few taps of the Glass, the poster is now in English. You do not see the Spanish anymore! I tried this out on foreign language publications I found on PubMed. Cool, huh?
One of the Google Glass's coolest features is also, in my opinion, its biggest drawback: voice controls. In the classroom, in the lecture hall, at the bedside – none of these situations permits a medical student to speak aloud to his or her Glass without being disruptive. The touch controls are not a keyboard, so to take advantage of a Google search during lectures, you are better off using a tablet or laptop. (I can imagine Glass being useful in anatomy lab, though.) Another significant issue I experienced was getting Internet access. Boy, was it complicated. You must log into your Google account on a computer and generate a QR code for the specific network, which the Glass then scans to get connected. Having assumed that my Glass could function completely without a computer, I struggled with this for a long time before finally asking for help. One final comment on the hardware: the Glass has a short battery life (about 2 hours) and heats up very quickly (within 15 minutes). Personally, I found it a bit uncomfortable to have a device with a decently high temperature pressed against my right temple for a few hours.
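For the curious, the general idea behind QR-based Wi-Fi setup is easy to sketch. Below is a minimal illustration, not the actual MyGlass flow: it packs network credentials into the widely used "WIFI:" QR payload using the open-source qrcode Python package. Whether Glass accepts this exact payload format is an assumption on my part -- Google's own setup page generates the real code for you.

# Minimal sketch of QR-based Wi-Fi provisioning (illustrative only).
# Assumption: the device reads the common "WIFI:" payload; Google's MyGlass
# page generates its own code, so treat this as a stand-in for the idea.
import qrcode  # pip install "qrcode[pil]"

def wifi_qr(ssid: str, password: str, auth: str = "WPA"):
    payload = f"WIFI:T:{auth};S:{ssid};P:{password};;"  # credentials baked into the code
    return qrcode.make(payload)                         # returns a PIL image of the QR code

if __name__ == "__main__":
    wifi_qr("MedSchoolWiFi", "not-my-real-password").save("glass_wifi.png")
    # Hold the printed or displayed code up to the Glass camera to connect.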
So all in all, I think the Google Glass has the potential to transform medical education, but changes to the current learning environment and infrastructure are definitely required before Glass can become mainstream. For now, I think Glass is a noteworthy and fun addition to student life outside of the classroom, and it may yet prove to be the next big enhancement to everyone's social life.
As for me, I have since returned to my Glass-less life. Hope you enjoyed reading. Until next time! "That Google kid," signing off.
An unofficial experiment by Cornell faculty member, Internal Medicine physician, and Google Glass explorer Dr. Henry Wei. Med students and doctors are given a chance to try Google Glass for a few weeks, and blog about what they think. Their hopes, fears, and vision for the future are posted here. Needless to say, the views expressed here are personal and do not reflect those of Cornell University, nor the employers nor spouses nor pets of any of the individuals writing or posting here.
Friday, February 21, 2014
Ok Glass: Google Now, for Health
Google Now is onto something, and I really can't wait to see it applied in healthcare -- Glass or no Glass.
For those who haven't played with it, Google Now is a neat approach to predicting what information you're likely to need, rather than waiting for you to search for it. Here's how they put it:
"Google Now works in the background so you don’t have to. Your information is automatically organized into simple cards that appear just when they’re needed. Because Google Now is ready whenever you are, you can spend less time digging and more time living."
Practically speaking, right now they've implemented what a bunch of 22-year-old engineers in Mountain View, California think you might want to know. Like the weather, the best route to avoid traffic (say, on the 101 freeway), or your favorite team's score. More practically, it'll automatically pull up, say, your flight's boarding pass when it's time, or an upcoming restaurant reservation from your Google Calendar. They've shown examples of a bunch of Cards so you can get an idea.
One of the limitations of computer-human interaction to date has been the need for you to actually indicate to your smartphone or computer what it is you want. As a medical student rotating at Harvard with Dr. Warner Slack, I noted that he had pioneered a lot of what still may seem futuristic in human-computer interaction (on a framework called CONVERSE that sat over M on MUMPS). And yet, there's still this question-answer-question-answer nature to healthcare.
Well it turns out healthcare isn't just about questions and answers.
Part of healthcare is knowing what's about to happen, and what should be happening, right as it happens -- or doesn't happen. In Pediatrics, we doctors call this "Anticipatory Guidance": a lot of things are about to happen that may seem strange to the newcomer, i.e. the parent, and our job as doctors is to explain what's coming -- in the case of pediatrics, routine and normal developmental stages -- and also what to think about if it doesn't happen.
In a different case, it's what we call the "hand on the doorknob" issue. Most of the important issues come up right as I'm wrapping up with my patient and have my hand on the doorknob. Like, oh, suicidal ideation, or erectile dysfunction, or that chest pain they've been having on and off for the past week. The more we can anticipate these "hand on the doorknob" issues and elicit them before relying on a physician to remember to ask, or on a patient to think to mention them, the better.
Doctors will differ in their perspective on this, by the way -- though many are shy to admit it, they'd sooner you not tell them about your suicidal ideation, or that chest pain, or whatnot, because it slows them down. Never mind that it might kill you; if they didn't hear it, then they're not liable. A good though imprecise litmus test of this is whether your doctor has a tendency to interrupt you, as a patient. There are limits to this, of course -- some patients have a habit of being repetitive or redundant in their descriptions and theories as to what's going on -- but by and large, if your doctor is doing most of the talking and interrupting you (usually not consciously), then they're probably more interested in getting you in and out of their office and assuming that everything was done correctly.
So where does this leave us, and what does this have to do with Google Glass? It has to do with anticipating what those anticipatory guidance and hand-on-the-doorknob issues will be. And it also has to do with how freakishly easy it ought to be to create Google Now logic and content.
Google Now is somewhat reminiscent of another card-based system: Apple's HyperCard, which predated the World Wide Web and had a programming language that allowed all sorts of people, from expert to novice, to create all sorts of useful utilities and applications. A friend who worked on the EDS Advanced Technology Projects team back in the heyday of HyperCard noted that he had helped build a HyperCard-based personal information manager, Executive Desktop, which predated Microsoft Outlook and even e-mail but managed to accomplish a lot of the core functionality. It was an excellent tool for rapid application development, particularly for multimedia-intensive applications (which would otherwise have been impossible for hobbyists to implement at the time), and its DNA is still evident in surviving Apple platforms and languages like AppleScript.
So, what do Google Now and Cards have to do with healthcare? Envision the following: routine tasks are broken up into a set of intelligent Google Now Cards. Rules are written around their context -- what's the appropriate time for them to display, gracefully and not obtrusively. These rules are effectively Clinical Decision Support rules, driven off of real-time data (or near real-time, at least). The contents of the cards are likewise dynamic and driven off of rules. Each card addresses a specific step in a clinical pathway, such that a "stack" of cards can guide you to a successful clinical outcome.
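To make the card-plus-rule idea concrete, here is a minimal sketch of my own (not any real Google Now API): each card carries a trigger rule that is evaluated against a real-time patient context, and the visible "stack" is simply whichever cards' rules fire right now.

# Hypothetical sketch of "cards as clinical decision support rules".
# The names (Card, PatientContext, due_cards) are made up for illustration.
from dataclasses import dataclass
from typing import Any, Callable, Dict, List

PatientContext = Dict[str, Any]  # real-time (or near real-time) clinical data

@dataclass
class Card:
    title: str
    rule: Callable[[PatientContext], bool]    # when should this card surface?
    render: Callable[[PatientContext], str]   # dynamic contents, also rule-driven

def due_cards(stack: List[Card], ctx: PatientContext) -> List[str]:
    # Return the rendered text of every card whose rule fires right now.
    return [card.render(ctx) for card in stack if card.rule(ctx)]

Evaluated continuously against a feed of new results and orders, a stack like this behaves like a clinical pathway that surfaces one step at a time.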
Be it for patients, caregivers, or providers like doctors, now we have a contextually-aware set of prompts that get the user "Just the right information, at just the right time" (Google's phrase).
Some of the basic use cases in health would center on medications alone. Antibiotic administration and level checking -- particularly for "big guns" like gentamicin -- require peak and trough samples to make sure the antibiotic is at a blood level that is effective against serious infections. A Google Now approach could generate simplified, highly graphical views that are timed just right.
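As one illustrative rule that could sit behind a card like those sketched above -- the 30-minute window is a placeholder, not dosing guidance:

# Hypothetical timing rule for a "draw gentamicin trough now" card.
# The 30-minute window is illustrative, not clinical guidance.
def gentamicin_trough_due(ctx: dict) -> bool:
    return (
        ctx.get("on_gentamicin", False)
        and ctx.get("minutes_to_next_dose", 9999) <= 30
        and not ctx.get("trough_drawn", False)
    )

print(gentamicin_trough_due(
    {"on_gentamicin": True, "minutes_to_next_dose": 20, "trough_drawn": False}
))  # -> True: time to surface the card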
Now, if you've ever tried to interrupt a doctor to ask them to do something, or to point out that they've missed something, you know you're tempting fate. Like NYC taxi drivers, doctors generally don't like to be told what to do -- whether it's how to get to whatever destination they're headed for, or whether they've made a clearly wrong turn or even hit a pedestrian. What Google Now can do, with its card vernacular, is make it more mainstream for the doctor to receive contextual information that's worth the interruption -- or allow them to select specific cards for their own personal stack. For example, most doctors want to know when a troponin level became available, and what the result was -- this is sort of critical, since it's a blood test indicating whether a patient is having a myocardial infarction, or "burning rubber" as we used to call it on the Cardiology rotation. Today we might have a pager show the info, maybe even the troponin level, but a contextually-aware Card might go so far as to lay out the specific road ahead -- whether oxygen, aspirin, beta-blocker, or nitrates had been given yet (depending on the type of M.I.), what the cardiac cath lab schedule looked like for an emergent coronary intervention, and even bed availability in the CCU or the medical/surgical ward for where the patient needed to be after the procedure.
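A troponin card in the same hypothetical scheme might render not just the number but the pathway status around it; the field names and checklist items below are examples I made up, not a complete ACS protocol:

# Hypothetical rendering of a troponin card: the new result plus the road ahead.
# Field names and checklist items are illustrative only.
def render_troponin_card(ctx: dict) -> str:
    lines = [f"Troponin: {ctx['troponin_ng_ml']} ng/mL"]
    for label, key in [
        ("Aspirin given", "aspirin_given"),
        ("Beta-blocker given", "beta_blocker_given"),
        ("Next cath lab slot", "next_cath_slot"),
        ("CCU bed available", "ccu_bed_available"),
    ]:
        lines.append(f"{label}: {ctx.get(key, 'unknown')}")
    return "\n".join(lines)

print(render_troponin_card({"troponin_ng_ml": 2.3, "aspirin_given": True}))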
If you can disaggregate clinical pathways into cards -- and then create a social environment for clinical leadership to share and improve upon them -- then suddenly we can achieve huge changes in quality without waiting for publication and re-interpretation, particularly if clinical outcomes are attached to the cards. In other words, what I envision is an "app store" of sorts, where in addition to subjective five-star reviews, you'd also have a sense of who was endorsing the card or the stack of cards, and more importantly, what sort of improvements it produced when it was introduced into an actual clinical setting.
Inevitably, the question will come up: yeah, but the FDA or [insert name of regulatory body here] will regulate it. And they should. But one thing I learned during my time in the federal government is that, like computer software, regulations, too, are written by humans and are ultimately malleable. It may take time, and the forces arrayed against change may appear immovable, but by and large, policy and regulation are human constructs -- they are not physical laws of the universe. So to that extent, if something can genuinely save lives, then as long as the guardrails are in place to prevent the quacks and vaporware vendors from putting something dangerous into patients' or doctors' hands, there ought to be a way for the FDA to approve this type of clinical decision support technology.
I think both Google Now and Google Glass -- particularly with this universe of "Cards" -- are ripe for healthcare to take advantage of now. To be fair, it'll probably happen for patients first, before doctors, because when it's your own life at stake, of course you're willing to have a Google Now card appear on your screen. When it's your doctor trying to see more patients more quickly, they have a real and unfortunate competing interest for their attention, which is the next patient. The reflex response from doctors is "yeah, I'd do this if I got paid for it." What they really mean is "yeah, I wish I got paid more for doing less," because when insurers ante up and say "yes, okay, we'll pay more for better clinical outcomes," not every doctor steps up and signs on to that deal. And it's not in a mean-hearted way -- particularly in primary care, where compensation is 2x or 3x lower than that of med school classmates who went into procedural or surgical subspecialties, primary care docs are flooded with an endless list of items to get done and not enough hours in the day to do it all effectively. So if anywhere, clinical decision support via Google Now is likely to take root first in coordinated care environments -- patient-centered medical homes, Accountable Care Organizations, basically team-based environments where coordination and better clinical outcomes are rewarded -- not just how fast and how many patients you can churn through a procedure suite.
What sort of Google Now cards would you like to see? As a patient? As a caregiver? As a doctor?
Tuesday, February 18, 2014
Ok Glass, Win a Contest
http://medtechboston.com/faq/
The MedTech Boston Google Glass Challenge
(Shamelessly copied from their FAQ)
Frequently Asked Questions: The MedTech Boston Google Glass Challenge
Do I need to have a Google Glass to participate?
No, you do not need to have a Google Glass. This contest is an "ideas contest" -- just imagine your idea, put it in writing and submit it at http://medtechboston.com/submit-ggchallenge/.

Can I submit more than one idea?
Yes, you can submit as many ideas as you like. Each will be judged by our panel of medical and programming experts. For details on the judges, click http://medtechboston.com/ggc-judges/.

But I don't know anything about programming…
You don't need to know anything about programming. In the qualifying round all you need to do is describe how you would use Google Glass to improve medicine in some way. It is helpful if you familiarize yourself with the capabilities of Google Glass, though. You can find details here http://www.google.com/glass/start/what-it-does/, here http://en.wikipedia.org/wiki/Google_Glass and here https://www.youtube.com/watch?v=cAediAS9ADM.

How does the contest work?
The contest is split into two rounds. The first round is the qualifying round and is open to anyone who has an idea (or many ideas) to submit. The qualifying round starts on February 10, 2014 and runs until March 22, 2014. During this time, you can submit your idea here: http://medtechboston.com/submit-ggchallenge/. You can submit as many ideas as you like. We will hold three rounds of judging during the qualifying round. Semi-finalists will be announced one week following the end of each round, on the schedule below. Once all semi-finalists have been chosen, the judges will choose the winners of the four prizes. Winners will be announced April 21, 2014.

By submitting my idea to the contest, do I give up my intellectual property?
NO, you do not. You retain all rights in your ideas and inventions. However, you must understand and agree that this contest is conducted in a public forum, and that your idea will be publicized on our website, read and discussed by our judges, and even picked up by other media outlets. The purpose of the contest is to get doctors and entrepreneurs together to think about how we can improve medicine using Google Glass. Secrecy is antithetical to this aim. For details, see the contest rules at http://medtechboston.com/ggc-rules/.

But my idea is so amazing, I want to keep it a secret
If you truly believe your idea is that amazing, you should quit your job, mortgage your house and start a company to develop the idea. Of course, that's not how startups work. Ask any venture capitalist what the most important factor for success is and they'll tell you it all comes down to the team and whether they can execute the idea. The idea itself is secondary. Google started out as a search engine, and even though they weren't first to market (they weren't even tenth; remember AltaVista, Lycos and Inktomi?), Google executed better than everyone else. Facebook wasn't the first social network (remember Friendster and MySpace?), and iTunes wasn't the first music service (remember Napster?). This challenge will allow you to receive valuable feedback from top experts and may even gain you the publicity you need to gather a team around you and execute. However, if you just want to keep your idea secret, then this contest isn't for you.
Monday, February 10, 2014
Google Glass and Med School Class - Part 1
Note: This blog post is the first part of a two-part series describing my experience using the Google Glass in the medical school setting. The events in this series occurred in January 2014 over a 2-3 week period. Enjoy and please let me know what you think!
---
I bet not many people (yet) can say that they have my kind of morning routine: wake up, brush teeth, get dressed, eat breakfast, and PUT ON GOOGLE GLASS! But that is exactly what I did this morning for the first time.
I am very fortunate to be the recipient of a Google Glass (a pair of Google Glasses?), which I will be testing out in the medical school environment for the next 2-3 weeks. Last night, I picked them up from Dr. Henry Wei, who had first offered me this opportunity. He briefly demonstrated the sophisticated controls, which, from the outside, looked like a series of head-bobbing and temple-tapping. (I had a moment's thought that Dr. Wei was actually Cyclops from the X-Men.) "There's definitely a learning curve," I was warned. Boy, he was not kidding!
First a little bit about me. I'm just your average medical student. As a second-year at Cornell, I am still in my "classroom years," only seeing patients one afternoon a week. I'm also not much of a techno-geek (which I mean in the most admirable way!). I know my keyboard shortcuts and can get around Best Buy, but when some of my best friends start talking about Corsairs and DeathAdders, I just nod along. So I think this will be a good test for the Google Glass, to see how well it can be applied to those of us who are not too hot or too cold in tech-savviness.
So as I walked to school, I tried to get Google Glass to do some cool stuff. "OK, Glass," I said. Nothing happened. The purple screen floated mockingly above my right eye's visual field, depicting the time and a prompt to say, "OK, Glass." "OK, Glass," I said again, a little bit louder this time. Still nothing. Weird, last night it had worked perfectly to bring up a scrolling menu that allowed me to verbally take a photo. "OK, GLASS!!!" I think I must have shouted, because people glanced at me uneasily, bunched their jackets, and briskly hurried past. A nearby flock of New York pigeons also took flight. Maybe I should try this again, I thought to myself, when I am away from loud traffic and high-strung Upper-East-Siders.
At school, I walked into my 8:00 class. PBL (Problem-Based Learning) was always a fairly relaxed, yet oddly educational, atmosphere. Our instructor, Dr. F, was about to continue leading a case discussion about a sick patient with kidney problems. As I walked in, the 9 other students turned their heads. "Whoa, what's that you got there?" "Is that Google Glass?" "Can I try it on?" So I spent the first 5 minutes of class passing the Glass around to those who were interested, trying not to think about my embarrassingly futile commute. It was good that people only wanted to put it on and see the floating screen. Had they asked me to take a picture, Google a fact, or – heaven forbid – shoot Cyclops lasers, I would have had to sheepishly decline.
As the class progressed, I tried to figure out the Glass's controls. A flick of the head upward turned it on. An invisible, touch-sensitive panel on the side of the frame let me scroll up, down, left, and right by simply swiping in that direction. A tap of the frame allowed me to select options. I turned to face my friend next to me. Click. I took a picture of him! "What might you look for in this patient if you suspected Alport Syndrome?" asked Dr. F, possibly noticing my inattentiveness. "Alport Syndrome patients have a mutation in the Collagen IV gene, which can also result in impaired vision and deafness," I rattled off, subconsciously pulling a Hermione Granger. "Very good! Did you just look that up on your Google Glass?" No, no I didn't. Because I don't know how.
When will I ever get the hang of this? I thought to myself. Oh well, at least I'll always know the time and never be late to class. My Google Glass will make sure of that.
To be continued!
Wednesday, January 22, 2014
Google Glass is the API to control humans (and it's not as bad as it sounds)
With Google's acquisition of Nest, it occurs to me that Google Glass may have more far-reaching ambitions for human society than most of us can imagine right now. And it has less to do with serving the needs of the person wearing the Glass than with controlling that person to do the things the Glass tells them to do. So cynics can rest comfortably with their critiques of whether or not Glass is useful -- it may not be, to them. But that's okay. In other words, Glass may be the beginning of the API -- application programming interface -- to control humans.
Hold on, wait, it's not as bad as it sounds! Really! What I mean is augmented intelligence, not mindless drone control. Okay, maybe it's a little creepy.
A few years ago I was lucky enough to be in Cambridge, Mass. to meet Sep Kamvar, a former Googler and now a Professor at the MIT Media Lab. Amongst other fascinating things he described (e.g. he felt iGoogle had made some missteps with respect to the curation of its early developer community members), he posited the possibility of programming languages to interact with real-world, meat-space problems, not just objects and methods in digital space.
For example (my example, not his), just imagine if Andy Samberg and Justin Timberlake could write JavaScript to do the following, not just in virtual space, but in the real world:
box.cut(hole)
box.put(junk)
box.open()
APIs to interface with the real world could be thought of as, maybe, robotics. But think of where Amazon has gone with Mechanical Turk -- Human Intelligence Tasks where you can describe a task for a human to perform, and interface with a computer system. As Amazon describes this, it is Artificial Artificial Intelligence. I like a new platform called TaskRabbit, where you can basically find help to do stuff in real life -- household errands, skilled tasks, etc. TaskRabbit can basically match up extra labor supply with sporadic demand.
So what if the end-game for Google Glass isn't just an interface for the wearer to be the user? What if, instead, it's the best way to seamlessly strap a control mechanism onto another human?
Creepy, right? Well, consider the use cases in health care. What if you have someone who's relatively skilled and proficient with their hands, but only a novice with respect to healthcare? What if you could assign tasks and walk that person through anything from asking the appropriate questions and looking for the appropriate signs (physical manifestations) to diagnose malaria? Or administering the right treatment? Or performing a surgical or invasive procedure? What would it mean to global society if suddenly we didn't have to take a whole decade or more to train a doctor, but could mobilize masses of lower-skilled workers and put Google Glass on them to enhance their skill set and knowledge set, augmented by an instruction set designed by the world's best doctors and engineers?
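One crude way to picture "Glass as the instruction set" is a checklist that a remote supervisor -- human, software, or both -- steps the wearer through, confirming each step before revealing the next. This is purely a sketch; the steps below are placeholders, not a real malaria work-up.

# Sketch of a guided walkthrough: the wearer sees one step at a time and the
# supervisor confirms completion before the next step appears.
# The steps below are placeholders, not an actual clinical protocol.
WORKUP_STEPS = [
    "Ask: fever pattern, recent travel, chills?",
    "Check: conjunctival pallor, spleen size.",
    "Perform: rapid diagnostic test per kit instructions.",
    "Report: photograph the test strip for a remote read.",
]

def run_walkthrough(steps, display, confirm_done):
    # display(text) pushes a prompt to the heads-up display;
    # confirm_done(step) returns True once the supervisor signs off on the step.
    for step in steps:
        display(step)
        while not confirm_done(step):
            display(f"Waiting on: {step}")

# Trivial wiring for a dry run:
run_walkthrough(WORKUP_STEPS, display=print, confirm_done=lambda step: True)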
In the same way that you can imagine Cyrano de Bergerac whispering and coaching Christian beneath Roxane's balcony as to what to say, what if someone -- or something -- on the other side of Google Glass could enhance the wearer's abilities beyond what was ordinarily humanly possible in a world where they'd otherwise need to take the time in school to learn and train on whatever it is they were supposed to do?
If you've ever watched the TV show Chuck, or even better, the 1969 Disney live-action classic "The Computer Wore Tennis Shoes," you'll recognize the similarities here. In those two examples, a human being was magically gifted with ungodly amounts of knowledge that they could recall instantly. In the TV show Chuck, Zach Levi's titular character was not only able to "flash" on knowledge, but eventually do Matrix-style instant learning of physical and real-world skills -- say, martial arts fighting styles, flips, fencing skills, or how to speak a new language.
And, just so we don't forget the implications for healthcare, check out 7:40 in this compilation of clips from Chuck, where Chuck "flashes" and instantly learns surgical skills to be able to safely extract a bullet from a spy-friend's leg.
Oh, and for those of you who somehow missed the last decade, here's the scene from The Matrix where Keanu Reeves's character, Neo, learns jiu-jitsu in an instant. By the way, everyone remembers Neo's line "I know kung fu" from that scene. The really important line slips away almost undetected, when Tank says: "Ten hours straight... he's a machine." Neo has been boosted -- programmed -- to become super-human -- machine-like.
So I know we'll have our share of thoughts on how great Google Glass will be as a heads-up display for the wearer to, say, look up information or get contextual alerts. But I think the gang at Google X may have greater ambitions -- that is, to have a mechanism to see what the user is seeing (the front-facing camera), where their attention might be fixed (the wink camera), what they're hearing (the speech-processing microphone), and finally a way to "control" or transmit instructions to the user (the prism display and the earphone/bone-conduction audio setup). Even in its most basic form, it can be a brilliant, anywhere-capable teleprompter, which anyone from the President of the United States to Anchorman Ron Burgundy would probably acknowledge wields tremendous power.
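Stripped all the way down, that ambition is an input/output loop: sense what the wearer sees and hears, decide, push an instruction back. A toy sketch, with every object and method name hypothetical:

# Toy sketch of Glass as an I/O channel wrapped around a human.
# All objects and method names here are hypothetical.
def human_control_loop(camera, microphone, display, speaker, decide):
    while True:
        frame = camera.read()           # what the wearer is looking at
        utterance = microphone.read()   # what the wearer hears or says
        instruction = decide(frame, utterance)  # remote human, A.I., or both
        if instruction:
            display.show(instruction)   # prism display...
            speaker.say(instruction)    # ...and bone-conduction audio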
What if, for example, you could pack at least one Google Glass unit on every airplane? Then, in the rare case that an emergency arose, you wouldn't just hear flight attendants asking whether there were medical professionals on the plane; any willing helper could don the Glass and (especially if there were WiFi on the flight) be walked through -- either by A.I. or a human supervisor or both -- the interactions and actions necessary to see the patient safely through.
What if you could keep a Google Glass unit everywhere you had reason to staff people who knew CPR, or to keep a defibrillator around? Sure, maybe people might know CPR or what to do, but what if you could instantly convert any able-bodied person into a CPR-capable helper?
Or what if you could give every school nurse a Google Glass unit? They're already healthcare professionals -- now let the school nurse interface with a team of pediatricians ready to help them handle any more advanced issue, even if only to help triage it. My daughter was in a kindergarten class with another girl who had Type 1 Diabetes. Her parents were at wits' end, having to constantly come in to accompany her on field trips, or visit with the nurse to help guide the care of her insulin-dependent diabetes. What if Glass had enabled them -- or a pediatrician -- to join in and watch over her remotely? To guide an able-bodied helper, or the school nurse, or even their daughter directly, in the care and management of her Type 1 Diabetes glucose checks and insulin injections? Of course, to realize this vision we'd need a lot more sensors around. One could argue for wireless-capable "smart" sensors, like a Bluetooth glucometer -- but really, for the first version, if you had Glass, all you'd need is for the observer/supervisor to be able to see the glucometer reading in the Google Glass video feed. Indeed, if Google Glass's optical character recognition grew to be as good as its speech recognition, it might be able to parse that data and realize that, in the field of view, there was some alphanumeric data to process and hand off to other pieces of software.
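That last step -- pulling a number off a glucometer screen in the video feed -- is already easy to prototype on a desktop with the open-source Tesseract OCR engine, even if doing it on Glass itself is another matter. A hedged sketch; the thresholds and file name are made up:

# Sketch: read a glucose value off a photo of a glucometer screen and flag
# out-of-range values. Thresholds are illustrative, not clinical advice.
import re
from PIL import Image
import pytesseract  # pip install pytesseract (plus the Tesseract binary)

def read_glucose(image_path: str):
    text = pytesseract.image_to_string(Image.open(image_path))
    match = re.search(r"\b(\d{2,3})\b", text)  # first 2-3 digit number on the screen
    return int(match.group(1)) if match else None

def needs_attention(mg_dl: int) -> bool:
    return mg_dl < 70 or mg_dl > 250  # placeholder alert thresholds

# reading = read_glucose("glucometer_frame.jpg")
# if reading is not None and needs_attention(reading):
#     print(f"Glucose {reading} mg/dL -- notify the supervising pediatrician.")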
Sure, the dream is to someday have a holographic, artificially intelligent doctor, like The Doctor on Star Trek: Voyager. But what if, as a stepping stone to artificial intelligence, we might do something that might be described as "augmented intelligence"?
I, for one, welcome our new Google overlords. MIT Technology Review has theorized that innovation and technology destroy jobs. In a June 2013 article, David Rotman points out that Professor Erik Brynjolfsson and his collaborator Andrew McAfee argue that industrial robotics and other advances in productivity (self-driving cars, anyone?) will create reductions in demand for many types of human workers. But this is predicated upon the notion that innovative technology might not augment the capacity for humans to upgrade their mojo, so to speak -- that is, to take these obsolete workers and still render them useful as actors in the physical world.
It's downright Shakespearian if you really take a step back and think about this eventuality for Google Glass -- as the beginning of an API to control human beings.
As You Like It
Act II Scene VII
All the world's a stage,
And all the men and women merely players:
They have their exits and their entrances;
And one man in his time plays many parts,
His acts being seven ages. At first, the infant,
Mewling and puking in the nurse's arms.
And then the whining school-boy, with his satchel
And shining morning face, creeping like snail
Unwillingly to school. And then the lover,
Sighing like furnace, with a woeful ballad
Made to his mistress' eyebrow. Then a soldier,
Full of strange oaths and bearded like the pard,
Jealous in honour, sudden and quick in quarrel,
Seeking the bubble reputation
Even in the cannon's mouth. And then the justice,
In fair round belly with good capon lined,
With eyes severe and beard of formal cut,
Full of wise saws and modern instances;
And so he plays his part. The sixth age shifts
Into the lean and slipper'd pantaloon,
With spectacles on nose and pouch on side,
His youthful hose, well saved, a world too wide
For his shrunk shank; and his big manly voice,
Turning again toward childish treble, pipes
And whistles in his sound. Last scene of all,
That ends this strange eventful history,
Is second childishness and mere oblivion,
Sans teeth, sans eyes, sans taste, sans everything.
Monday, January 13, 2014
Ok Glass: scribe for me
Human interaction is probably going to change because of Google Glass and/or derivative technology. I think it's easy to overlook the speech recognition baked into Google Glass; it's really quite stunning (and, to be fair, I've been impressed by Apple's Siri as well, along with the Mac OS X continuous speech recognition capabilities for dictation). Terms like "pheochromocytoma" (a type of tumor that precipitously secretes adrenaline into your bloodstream) are handled with aplomb. We think of this as a nice-to-have, but really this is a critical aspect of clinical medicine -- how frail are we, as human doctors, in recognizing everything the patient has said? If the patient mutters under their breath that they've had ankle pain while they're on a fluoroquinolone, does the physician's brain even register that they said it and tie it into the issue at hand as a possible side effect?
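That fluoroquinolone example is exactly the kind of thing a transcript-watching program could catch even when a tired human brain does not. A toy sketch -- the interaction table is a two-row stub, not a real drug knowledge base:

# Toy sketch: scan a visit transcript for symptoms that matter given the med
# list. The FLAGS table is a tiny stub, not a real drug knowledge base.
FLAGS = {
    ("fluoroquinolone", "ankle pain"): "Possible tendinopathy -- consider the antibiotic as a cause.",
    ("ace inhibitor", "cough"): "Dry cough is a known ACE inhibitor side effect.",
}

def flag_transcript(transcript: str, med_classes: list) -> list:
    text = transcript.lower()
    meds = {m.lower() for m in med_classes}
    return [note for (drug_class, symptom), note in FLAGS.items()
            if drug_class in meds and symptom in text]

print(flag_transcript("I've had some ankle pain on and off...", ["fluoroquinolone"]))
# -> ['Possible tendinopathy -- consider the antibiotic as a cause.']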
Scribing, it turns out, is an important function. Katie Hafner writes about it in yesterday's New York Times, in an article titled "A Busy Doctor's Right Hand, Ever Ready to Type." An ENT surgeon and close friend of mine has often yearned for some sort of system that could automatically scribe your entire interaction with a patient -- or at the very least the highlights. When you have a scope jammed up someone's nose, it's not really a great time to be scribbling something. Charting, it turns out, is a pain.
Writing physician notes -- or typing them while your patient is talking -- can be a good process in and of itself though. For the non-doctors out there, it's important to understand the difference between the type of medical chart note written by a medical student or trainee, and one written by a full-fledged attending doc (or more senior resident).
When medical students first start out, they learn a structure, to be complete and avoid missing something. This can be agonizing to listen to because they're working their way through a very standard structure, which almost literally goes head to toe. As a result, their medical documentation also is very complete to the point of containing a lot of stuff that really doesn't weigh into our diagnostic decision making -- like a census surveyor trying to catalog every detail of your illness.
By comparison, when you listen to an attending physician take a history from a patient (i.e. interviewing them), they're problem-focused. In addition to already having a good sense of the patient's history, they'll zoom in on the hypotheses going through their head -- we call this differential diagnosis -- and ask questions to refine those hypotheses. A loose analogy for this process is "Guess the Dictator/Sit-com Character" (http://www.smalltime.com/Dictator), although that game is more of a decision tree than a probabilistic Bayesian process.
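To make the distinction concrete, here's a toy sketch -- every prior and likelihood below is invented for illustration -- of what Bayesian refinement of a differential looks like: each answer re-weights the whole list of hypotheses, and the next question chases whatever is now most probable (or most dangerous), rather than marching down a fixed branch of a tree.

```python
# Toy illustration of hypothesis refinement (all numbers invented).
# Each new finding re-weights the differential via Bayes' rule instead of
# sending the interviewer down one fixed branch of a decision tree.

priors = {"pneumonia": 0.30, "pulmonary embolism": 0.20, "heart failure": 0.50}

# P(patient reports pleuritic chest pain | diagnosis) -- made-up likelihoods
likelihood_pleuritic_pain = {"pneumonia": 0.40, "pulmonary embolism": 0.70, "heart failure": 0.10}

def bayes_update(current, likelihoods):
    """Multiply each hypothesis by its likelihood for the new finding, then renormalize."""
    unnormalized = {dx: current[dx] * likelihoods[dx] for dx in current}
    total = sum(unnormalized.values())
    return {dx: weight / total for dx, weight in unnormalized.items()}

posterior = bayes_update(priors, likelihood_pleuritic_pain)
print(posterior)  # PE jumps to the top, heart failure drops -- so the next question targets the new leader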
Anyhow, the point is that the note the attending scribbles down appears brief -- one of my favorites from a legendary cardiologist at NY Presbyterian - Cornell read, simply, "LASIX!". But it reflects a thought process, a sort of filtration and re-prioritization of what matters.
Google Glass, therefore, could be a very interesting scribe. But a computer could be a scribe, too, right? Well, this is where the display comes in. Like a human scribe in the exam room, who knows just enough to be clever, Glass could point out simple things -- like, say, inconsistent stories in a pain medication-seeking patient. Or search against common mispronunciations of drugs to speed up the game of "I take this pill, I think it's called, umm...." These sorts of things are handy for a doc to be able to glance at as they come up.
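As a flavor of how simple that drug-name lookup could be, here's a minimal sketch. The formulary list is made up, and a real system would use phonetic matching against a proper drug vocabulary such as RxNorm; this just fuzzy-matches whatever the recognizer heard using the standard library.

```python
# Minimal sketch: match a garbled, half-remembered drug name against a
# (tiny, made-up) formulary. A real system would use phonetic matching
# and a full drug vocabulary such as RxNorm.
import difflib

FORMULARY = ["lisinopril", "losartan", "levothyroxine", "metformin", "metoprolol", "atorvastatin"]

def guess_drug(heard, n=3):
    """Return the closest formulary entries to the (possibly garbled) name."""
    return difflib.get_close_matches(heard.lower(), FORMULARY, n=n, cutoff=0.6)

print(guess_drug("lysinoprel"))   # -> ['lisinopril']
print(guess_drug("metaformin"))   # -> ['metformin']
```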
In a more advanced form, you'd use this sort of natural language processing to parse and tokenize the text, so that you could start applying Clinical Decision Support rules. Clinical Decision Support is what separates merely reporting quality measures from actually acting on them and improving outcomes. As Dr. Jacob Reider at HHS/ONC might put it, it's like the difference between knowing how to give kids a C- grade vs. giving teachers a way to help those kids improve their grades.
So in addition to scribing, Glass should eventually be a nice mechanism to point out "wow they just described Lyme disease... you want to ask this other question to clinch it?" I think of systems like Dxplain for that, to improve diagnostic certainty. Other systems have aimed at diagnosis as well, and certainly there's a fascination with artificially-intelligent diagnosticians. IBM Watson is a trendy example of this.
But it turns out that another arena of Clinical Decision Support is just common stuff that docs forget. Like remembering to prescribe a medication called an ACE-inhibitor in a diabetic with early kidney damage called microalbuminuria. I once led a chart review study of patient records looking at the cause of gaps in care, like this example, to see whether it was the doctor who had failed to prescribe the drug, or the patient who had failed to fill the prescription. While we think of patient non-adherence as a huge issue, it turns out that if the doctor forgets or neglects to write the prescription, there's no chance for the patient to be non-adherent. And, as the study turned out, doctors simply not remembering to prescribe or refill the medication was all too common -- the drug was nowhere to be found in the chart at all, rather than written in the chart but never picked up by the patient.
This begets a whole user interface and usability issue -- what's the best way to offer a tip or correction to a doc? Too many and they get alert fatigue. Too few and they'll just keep making the mistakes they're destined to make. Glass offers a great way to slip a message in unobtrusively on the heads-up display, even if it's just a reminder to ask one more question. Likewise, Glass could listen for the appropriate action as the result of the prompt/alert; e.g. once you said "Mrs. [Patient], I'd like to start you on a drug called lisinopril," Glass would recognize that you had fulfilled the clinical decision support prompt and dismiss it for you.
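Here's a hedged sketch of what one such rule, plus the listen-for-fulfillment behavior, might look like. The chart fields, the ACE-inhibitor list, and the prompt wording are all invented for illustration -- the point is only that a single rule can both raise a quiet prompt and retire it when the transcript shows the doc acting on it.

```python
# Hedged sketch -- chart fields, drug list, and wording are invented.
# One CDS rule: a diabetic with microalbuminuria should be on an ACE inhibitor.
# The prompt dismisses itself once the live transcript shows the action taken.

ACE_INHIBITORS = {"lisinopril", "enalapril", "ramipril", "benazepril"}

def needs_ace_inhibitor(chart):
    """True if the gap in care exists: diabetes + microalbuminuria, no ACE inhibitor charted."""
    on_ace = any(med in ACE_INHIBITORS for med in chart["medications"])
    return chart["diabetes"] and chart["microalbuminuria"] and not on_ace

def prompt_state(chart, transcript):
    if not needs_ace_inhibitor(chart):
        return "no prompt"
    # Dismiss the reminder if the doctor is heard prescribing one of the drugs.
    if any(drug in transcript.lower() for drug in ACE_INHIBITORS):
        return "fulfilled -- dismiss quietly"
    return "show unobtrusive prompt: consider starting an ACE inhibitor"

chart = {"diabetes": True, "microalbuminuria": True, "medications": ["metformin"]}
print(prompt_state(chart, "Mrs. [Patient], I'd like to start you on a drug called lisinopril"))
# -> fulfilled -- dismiss quietly
```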
So speech recognition is, I believe, not only a critical part of Glass, but an incredible enabler of a potentially much more fluid experience between doctors and patients -- and, if driven by Clinical Decision Support, a whole lot better for quality as well.
(Next up: medical student James X. Wang. Fear not: we'll get to the neurologist a little later!)
Sunday, January 5, 2014
OK, Glass: Start a Timer
When Dr. Wei offered to let me try Google Glass I was unquestionably excited. It’s not every day that you get to try out a piece of hardware billed as the first ever mass-market wearable computer. But, as someone fascinated with and regularly engaged in using new technology, I’d read dozens of reviews arguing that Google Glass is not the prototype for future wearable computers. I won’t rehash those arguments but I do want to make some general points about wearable computers and their use in scientific and medical contexts.
I am a 3rd year MD-PhD student, meaning I spend a lot of my time wearing gloves, mixing stuff in test tubes, setting timers, and waiting for “science” to happen. For me, the natural first destination to try out Glass was therefore in the lab. “OK, Glass. Start a 5-minute timer,” I said aloud, ignoring sidelong glances from curious colleagues. Glass stared back at me blankly and did nothing. It took about 10 minutes of fiddling (and some input from Benji) to figure out how to do anything with Glass.
Once I got Glass working, I found that it had limited utility in the lab. For instance, sometimes I save time by pulling out my iPhone to take pictures of labeled test tubes. While taking photos with Glass is trivial, retrieving them and zooming in far enough to see the labels is difficult. Looking up information or doing calculations was also problematic. Even when Glass understands you and searches Google for the right thing, it insists on reading search results out loud to you—certainly not the most efficient way to retrieve information!
I was most excited to read documents on Glass. For instance, I wanted to see if I could read an experimental protocol on Glass as I was doing the experiment. In practice there was no easy way to load documents onto Glass—and anyway it is a strain on the eyes to read long-form content off of the Glass screen (the built-in NY Times app is almost unusable for this reason).
Leaving aside the user-interface issues that will surely be fixed in the coming months, I want to focus on two choices Google made that will limit how Glass can be used.
Here is how you actually start a timer:
(1) Tap on the side of Google Glass (or look up) to activate the screen and microphone.
(2) Say “OK, Glass. Start a timer.”
(3) Use your finger to scroll through a clunky hour/minute/second wheel to the desired time.
(4) Tap on the side to open the context menu and select “Start Timer.”
Notice that you have to touch Glass multiple times. In fact, most verbal commands require you to also interact with the device using your fingers. This is a non-starter when working in the lab, because you must take off your gloves, fiddle with Glass, and then put gloves back on. The clunky user interface can be fixed, but requiring finger navigation makes Glass unusable in any setting where gloves are involved, e.g. in the lab or during surgery. Glass will only be useful in these settings if Google can drastically improve the voice recognition software. This brings us to a general point about wearable computers in a medical and scientific context: the richer the hands-free interface, the greater the utility in the lab or sterile field. Google has a long way to go before the verbal interface alone can drive the device.
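For what it's worth, the language understanding needed to skip the touch wheel isn't exotic. Here's a toy sketch (the phrasing grammar is invented, not anything Glass actually supports) of turning a fully spoken timer request straight into seconds, gloves on:

```python
# Toy sketch of a fully spoken timer command (phrasing grammar invented):
# "start a 5-minute timer" goes straight to seconds -- no touch wheel needed.
import re

UNIT_SECONDS = {"second": 1, "minute": 60, "hour": 3600}

def parse_timer(utterance):
    """Return total seconds for a spoken request, or None if no duration was heard."""
    total = 0
    for amount, unit in re.findall(r"(\d+)\s*-?\s*(hour|minute|second)s?", utterance.lower()):
        total += int(amount) * UNIT_SECONDS[unit]
    return total or None

print(parse_timer("OK Glass, start a 5-minute timer"))     # 300
print(parse_timer("set a timer for 1 hour 30 minutes"))    # 5400
```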
Equally important to hands-free interfaces is how Glass enhances the world around you. In short: Glass doesn’t. It provides a small HUD (heads-up display) in the upper right-hand corner of your view that can notify you when, for instance, you have a new email or a phone call. It cannot superimpose content onto the world, the way that some other wearable computer products do. This means that Glass can’t help a construction worker position a beam, or indicate which test tubes a scientist has already added reagents to, or overlay a CT scan onto a patient’s body to guide a surgeon’s scalpel. This lack of world-enhancement was surprising, and I sincerely hope that Google is working on a different version of Glass that can accomplish some of the truly futuristic possibilities that people have been dreaming about for decades.
As a laboratory scientist, future doctor and someone fiercely optimistic about how technology can make science and medicine faster, better, cheaper and more accurate, I was unimpressed by what Glass had to offer but remain excited to see how wearable computers will enhance the laboratory and the clinic.
Ron Gejman
@rongejman
Ok Glass: teach a doctor to intubate.
Happy New Year! This post is about Academic Emergency Medicine & Google Glass and its potential application in supervising trainees... and then winds up at Atul Gawande's observation about the need for -- and lack of -- coaching for surgeons and medical proceduralists.
Last week I had the opportunity to sit down with an academic emergency medicine physician from NYC, who commented on the sheer lack of tools to help supervise certain medical procedures being performed by Emergency Medicine residents (physicians in training).
One of the most critical scenarios is intubation -- where docs jam a breathing tube down the throat and hope that they've hit the right place. Best case, they've landed in the trachea, right before it divides into two parts -- the bronchi -- in the lungs. Worst case, they've landed in the esophagus, and instead of delivering oxygen to the lungs and helping the patient live, they're basically providing oxygen to the stomach.
I've seen attendings supervise this, particularly in anesthesia (who are masters of airway management -- this is often under-appreciated by patients). The resident will use their intubation scope to lift the tongue up and out of the way, and then the attending will stand behind them like an umpire behind a catcher at a baseball game. And, like the umpire, if they have a line of sight (the resident ducks their head out of the way briefly), they'll be able to see right as the tip of the endotracheal tube is about to enter the trachea. But it's a pain because in that fleeting moment as the attending takes a look, the resident has to move their head out of the way.
Enter Google Glass. If the camera is close enough to the line of sight of the resident physician doing the intubation, it can capture a view of the ET tube as they try to land it in the trachea. And either via Google Hangouts video-conferencing, or via the MyGlass app that can mirror what's going on in the Glass unit, a supervising attending could watch that same view in real time without the resident ever having to move their head out of the way.
The same goes for multiple procedural specialties, from an emergency needle decompression of a tension pneumothorax, to a lumbar puncture, to the more mundane insertion of an IV line. As an attending physician, I could be at another clinical site, in a call room, or even at home in my jammies, woken up in the middle of the night, and quickly log in to lend an extra pair of eyes to the resident -- or intern -- performing the procedure.
Or, imagine if there were a supervisor centralized across geographies -- what if Google Helpouts (https://helpouts.google.com) weren't just for consumers, but for professional continuing education as well? What if the nation's best doctors for XYZ procedure were available to watch you perform it? You could be 30 years into your career and want someone to peek over your shoulder (or through your Google Glass) and give you coaching on your technique.
In a 2011 issue of the New Yorker, Atul Gawande remarked on the utility -- and rarity -- of personal coaches for surgeons and medical proceduralists like interventional cardiologists performing cardiac caths, or gastroenterologists. Link: http://www.newyorker.com/reporting/2011/10/03/111003fa_fact_gawande?currentPage=all
The only problem with this is that it's hard to find docs who want to supervise other docs, and to handle the logistics of getting them into the OR at the exact moment you happen to be doing the procedure (in an emergency room, for example). And also, as Dr. Gawande points out, there's the bigger issue of medical culture -- that "we may not be ready to accept -- or pay for -- a cadre of people who identify the flaws in the professionals upon whom we rely, and yet hold in confidence what they see."
So let me ask you out there. As a patient: would you rather have a medical or surgical procedure performed on you as-is today, or with one of the country's experts in that procedure able to "peek over the shoulder" of your doctor using Google Glass + Google Helpouts? And: how much would you be willing to pay for that? What if it was a procedure for your child?
Up next (hopefully): a neurologist's views on his time with Google Glass. Stay tuned!
-Dr. Wei