Perhaps the biggest advance in healthcare IT innovation has been the inexorable evolution of mobile connectivity. The progressive technology drive has always been toward smaller (less obtrusive), faster, smoother, and safer devices. Nowhere is this more evident than in the jump from desktop PCs to laptops to tablets and finally smartphones, giving our hands greater freedom. And now we’re focusing on taking the next step: completely removing our hands from the equation. One ubiquitous example of progress in this direction is the introduction and wide adoption of voice-enabled technology like Siri, Alexa, and Google Assistant, all of which are currently limited to audio input. The next step in this evolution involves adding visual input in the form of head-mounted cameras, visual output displays (tiny optical viewing screens or transparent heads-up displays), and wireless connectivity—together defining the category of smart glasses. Typically, audio capability is present as well, and, increasingly, so is computer processing.
The most well-known of these devices is the seminal, highly publicized Google Glass. Staying true to the aphorism that necessity is the mother of invention, such wearable computer interface devices were conceived with specific use cases in mind. I’ll detail a few below, but, as with the introduction of any new technology, the number of creative applications will no doubt rapidly expand as product evolution advances.
As a primer, there are currently two evolving categories of smart glasses technology: augmented reality (AR) and mixed reality (MR). Augmented reality superimposes a computer-generated image, which the user cannot manipulate, on the user’s view of the real world. For example, the name of a plant will appear as you gaze at it, or a direction arrow will guide you as you navigate an unfamiliar neighborhood. Mixed reality allows the user to interact with the added virtual element. A good example is a surgeon superimposing and correctly positioning an x-ray over the patient’s spine during an operation. Very few mixed reality applications are available today, but this is where the technology is headed in the near future.
Humble Beginnings: Google Glass Smart Glasses
Google Glass is a small, lightweight wearable computer with a transparent display for hands-free work. It has been through many iterations, starting with a camera, display, and voice activation (not exactly smart glasses; it was used primarily for remote mentoring and training, with no virtual or visual enhancements) and progressing to AR functionality. At one point, Google appeared to have discontinued work on the device, but it has recently re-energized its development efforts. One major limitation of the technology is that it provides a display for the right eye only, which limits the extent and quality of the user’s immersive experience.
The Next Generation
Examples of current advanced devices include:
- MR – Microsoft HoloLens
- AR – RealWear HMT-1
- AR – Vuzix M300
- AR – Epson Moverio
- AR – Lenovo ThinkReality A6
- AR – Google Glass Enterprise Edition 2
Additional Product Feature Considerations
When comparing smart glasses, beyond clinical functionality, one should also consider:
- Battery life
- Waterproofing/water resistance
- Shock resistance
- Safety certifications
- Data security provisions
- EN 60601 compliance (the standard for medical electrical equipment)
- Tolerance of temperature extremes
- Head tracking
- Gesture controls
- Local device integration
- Local speech recognition (e.g., for noisy environments)
- Language translation
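For teams evaluating devices side by side, the checklist above can be captured as a simple comparison structure. The sketch below is purely illustrative: the device names and every feature value are hypothetical placeholders, not vendor specifications.

```python
from dataclasses import dataclass, fields

@dataclass
class SmartGlassesSpec:
    """Non-clinical evaluation criteria from the checklist above.
    All values used below are hypothetical placeholders, not vendor specs."""
    name: str
    battery_life_hours: float
    water_resistant: bool
    shock_resistant: bool
    en_60601_compliant: bool        # medical electrical equipment standard
    head_tracking: bool
    gesture_controls: bool
    local_speech_recognition: bool  # useful in noisy environments

def compare(devices):
    """Return the names of fields on which the candidate devices differ."""
    diffs = []
    for f in fields(SmartGlassesSpec):
        values = {getattr(d, f.name) for d in devices}
        if len(values) > 1:
            diffs.append(f.name)
    return diffs

# Hypothetical candidates for a purchasing comparison
a = SmartGlassesSpec("Device A", 8, True, True, True, True, False, True)
b = SmartGlassesSpec("Device B", 4, True, False, True, True, True, True)
print(compare([a, b]))
```

A structure like this makes it easy to flag exactly where two candidate devices diverge before clinical criteria are even considered.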
Smart glasses communicate wirelessly (over cellular networks or Wi-Fi) with the cloud, where their functionality (i.e., interactivity) is managed by middleware and AR software. One example is HPE Visual Remote Guidance (VRG) software, which, used in conjunction with Vuforia’s AR development software, enables hands-free wearable devices (as well as phones and tablets) to connect to the enterprise via cellular networks or Wi-Fi.
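The VRG and Vuforia internals are proprietary, but the device-to-cloud pattern described above can be sketched generically: the glasses package a camera frame, send it over a wireless link, and cloud-side middleware returns overlay instructions for the display. Every message shape and field name below is a hypothetical illustration, not any vendor’s API.

```python
import json

# Hypothetical message flow between a smart-glasses client and cloud
# middleware. This only illustrates the pattern described above
# (device -> wireless link -> cloud AR service), not a real product API.

def build_frame_message(device_id, frame_bytes, network="wifi"):
    """Client side: package a camera frame for upload over Wi-Fi or cellular."""
    return {
        "device_id": device_id,
        "network": network,            # "wifi" or "cellular"
        "frame_size": len(frame_bytes),
    }

def middleware_annotate(message):
    """Cloud side: return AR overlay instructions for the uploaded frame.
    A real service would run object recognition on the frame itself."""
    return {
        "device_id": message["device_id"],
        "overlays": [
            # Hypothetical label anchored at pixel coordinates in the display
            {"type": "label", "text": "IV pump", "x": 120, "y": 80},
        ],
    }

msg = build_frame_message("glasses-01", b"\x00" * 1024)
reply = middleware_annotate(msg)
print(json.dumps(reply))
```

The key architectural point is the division of labor: the headset stays a thin client, while the heavy AR processing lives in cloud middleware reachable over whatever wireless link is available.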
Healthcare Use Cases
At the end of the day, the value of any healthcare technology is determined by its ability to drive improvement in both efficiency and patient outcomes. To this end, the following are examples of currently employed and rapidly evolving use cases that are showing great promise.
- Augmented Mentoring (Education and Guidance): A physician performing patient rounds or surgery can enable remote expert colleagues, residents, or students to see and hear what they are experiencing and to offer feedback. It can similarly be used for grand rounds. Conversely, a remote category-expert physician can guide a resident who is attending to a patient. In addition, remote guidance can aid a technician in the repair and maintenance of capital medical equipment and IT infrastructure.
- Vein Visualization: AccuVein, currently in use in hospitals, can project a map of a patient’s veins onto their skin, making it easier for healthcare workers to find a vein on the first try.
- Surgical Visualization: Medical image processing combined with 3D AR visualization enables orthopedic surgeons to perform minimally invasive procedures more accurately by projecting three-dimensional representations of the patient’s internal anatomy into the surgeon’s limited field of view.
- Surgical Planning: Medivis’ combination of AR + AI + imaging enables physicians to visualize the patient’s anatomy holographically, resulting in a much more detailed vision of the body’s architecture than is possible using traditional 2D scans.
- Data/Image Access: A provider could call up x-rays, test results, anatomical guides, or historical skin lesion images without averting their eyes from the patient or the surgical field.
And, in the words of Marisa Tomei in My Cousin Vinny, there’s more. You can count on an avalanche of new solutions coming down the pike as hardware advances in processing speed and connectivity and evolves into more personally integrated delivery vehicles such as contact lenses and implants, together enabling extraordinary breakthroughs in software development. And the great news is that the patient is the ultimate beneficiary.