I am a third-year MD-PhD student, meaning I spend a lot of my time wearing gloves, mixing stuff in test tubes, setting timers, and waiting for “science” to happen. For me, the lab was therefore the natural first place to try out Glass. “OK, Glass. Start a 5-minute timer,” I said aloud, ignoring sidelong glances from curious colleagues. Glass stared back at me blankly and did nothing. It took about 10 minutes of fiddling (and some input from Benji) to figure out how to do anything with Glass.
Once I got Glass working, I found that it had limited utility in the lab. For instance, sometimes I save time by pulling out my iPhone to take pictures of labeled test tubes. While taking photos with Glass is trivial, retrieving them and zooming in far enough to see the labels is difficult. Looking up information or doing calculations was also problematic. Even when Glass understands you and searches Google for the right thing, it insists on reading the search results out loud to you, which is certainly not the most efficient way to retrieve information!
I was most excited to read documents on Glass. For instance, I wanted to see whether I could follow an experimental protocol on Glass while doing the experiment. In practice, there was no easy way to load documents onto Glass, and in any case reading long-form content off the Glass screen is a strain on the eyes (the built-in NY Times app is almost unusable for this reason).
Leaving aside the user-interface issues that will surely be fixed in the coming months, I want to focus on two choices Google made that will limit how Glass can be used.
Here is how you actually start a timer:
(1) Tap on the side of Google Glass (or look up) to activate the screen and microphone.
(2) Say “OK, Glass. Start a timer.”
(3) Use your finger to scroll through a clunky hour/minute/second wheel to the desired time.
(4) Tap on the side to open the context menu and select “Start Timer.”
Notice that you have to touch Glass multiple times. In fact, most verbal commands also require you to interact with the device using your fingers. This is a non-starter when working in the lab because you must take off your gloves, play with Glass, and then put the gloves back on. The clunky user interface can be fixed, but requiring finger navigation makes Glass unusable in any setting where gloves are involved, e.g., in the lab or during surgery. Glass will only be useful in these settings if Google can drastically improve the voice recognition software. This brings us to a general point about wearable computers in a medical and scientific context: the richer the hands-free interface, the greater the utility in the lab or the sterile field. Google has a long way to go before the verbal interface alone can drive the device.
Equally important to a hands-free interface is how Glass enhances the world around you. In short: it doesn’t. It provides a small HUD (heads-up display) in the upper right-hand corner of your field of view that can notify you when, for instance, you have a new email or a phone call. It cannot superimpose content onto the world the way some other wearable computing products do. This means that Glass can’t help a construction worker position a beam, or indicate which test tubes a scientist has already added reagents to, or overlay a CT scan onto a patient’s body to guide a surgeon’s scalpel. This lack of world-enhancement was surprising, and I sincerely hope that Google is working on a different version of Glass that can accomplish some of the truly futuristic possibilities people have been dreaming about for decades.
As a laboratory scientist, a future doctor, and someone fiercely optimistic about how technology can make science and medicine faster, better, cheaper, and more accurate, I was unimpressed by what Glass had to offer, but I remain excited to see how wearable computers will enhance the laboratory and the clinic.
Ron Gejman
@rongejman