Headset healthcare
29 October 2018
Augmented and mixed-reality platforms are helping surgeons prepare for and carry out increasingly complex procedures. Abi Millar hears from Dr Hans-Jürgen von Lücken, senior physician at Hamburg’s Kath. Marienkrankenhaus, and Jakub Wlasny, lead developer at apoQlar, about the potential for adding a new layer of reality to the surgical theatre.
Could augmented reality (AR) be the next big thing within the healthcare field? Over the past few years, a wave of startups has emerged, focusing on everything from neuro-assistance for autistic people (Brain Power) to easier blood transfusions (AccuVein). Perhaps most excitingly, AR could change the face of surgery, replacing many of the tools necessary today with a single, wearable piece of equipment.
The possibility first emerged in 2013, when Dr Rafael Grossmann became the first person to use Google Glass during a surgical procedure. Since then, many new players have entered the field with a view to developing AR for the operating table.
These have included Medsights Tech, which uses image-reconstructing technology to give surgeons ‘X-ray vision’, and EchoPixel, which offers advanced medical visualisation software. Most recently, Touch Surgery, a London startup developing holographic surgery headsets, raised $20 million from the backers of the Oculus headset.
“New applications – some that we can’t even imagine yet – will help transform surgery and the surgical experience,” said Grossmann, presciently, in a 2013 interview.
One of the leaders in the field is the German company apoQlar, which is developing a software tool named Virtual Surgery Intelligence (VSI). It uses artificial intelligence to render MRI and CT images in 3D. When a surgeon puts on the AR headset, the 3D images merge virtually with the patient, giving the surgeon a new level of anatomical detail. The visualisation can be controlled with gestures and voice commands.
“It displays medical images in a 3D hologram, which the user can freely manipulate around the physical environment,” says Jakub Wlasny, lead developer at apoQlar. “Those images are automatically fixed onto the body of the patient. This means doctors can slice through those holograms in order to explore the patient’s internal structures, without having to look at any specific devices or screens.”
Alongside its surgical uses, the tool can be used preoperatively to prepare for upcoming surgeries, and postoperatively to evaluate the surgical progression. The company is also developing two related tools: VSI Patient Education (for informing patients about their condition) and VSI Education (for training surgeons).
After extensive testing, VSI has recently been introduced to a hospital in Hamburg, where surgeons in the head and neck department are using it as a preoperative and postoperative assistant.
“We have found [the tool] gives us a much deeper and more detailed insight into the anatomical structures,” says Dr Hans-Jürgen von Lücken, senior physician at Hamburg’s Kath. Marienkrankenhaus. “But the VSI is also a big help for our interns. This way, they can orientate themselves more quickly and the more experienced doctor can better communicate their surgical approaches and procedures.”
AR technology involves ‘augmenting’ the real-world environment with various pieces of computer-generated input (sound, images or information), by way of a screen or headset. Although it is related to virtual reality (VR), there is one key difference: VR places the user in a completely immersive world, detached from reality, while AR overlays information onto the physical setting.
Although AR is only beginning to make its way into the healthcare space, its potential is significant. From a medical standpoint, one key advantage is that practitioners don’t need to take their focus off the patient. Currently, surgeons need to switch between various reference points – from the person on the operating table to the medical images and patient data displayed on various screens. With applications such as VSI, they can access all the information they need while the patient remains in their field of vision.
“We want to deliver a more immersive and friendlier way of working with medical images, helping doctors navigate around the surgical site and orientate those anatomical structures. This makes it quicker for them to proceed with surgeries,” says Wlasny. “We also want to eliminate as many of the other tools that are now used as we can, and put everything in one simple device.”
As with many other technologies in this field, VSI uses Microsoft’s headset, HoloLens. The device blends holographic content with the physical environment and enables a live broadcast of the surgery from the surgeon’s perspective, meaning less experienced doctors could receive remote assistance.
Having already been used in many industrial settings, the HoloLens is just now beginning to make waves in surgery. In December, a team of French surgeons used HoloLens to livestream what they called the ‘world’s first surgical intervention performed with a mixed-reality collaborative platform’. And in January, a team at Imperial College London demonstrated how surgeons can use HoloLens headsets while conducting reconstructive lower limb surgery.
“The application of AR technology in the operating theatre has some really exciting possibilities,” said Jon Simmons, who led the Imperial College team. “It could help to simplify and improve the accuracy of some elements of reconstructive procedures. While the technology can’t replace the skill and experience of the clinical team, it could potentially help to reduce the time a patient spends under anaesthetic and reduce the margin for error. We hope that it will allow us to provide more tailored surgical solutions for individual patients.”
At the time of writing, apoQlar is waiting to receive medical certification for VSI, which will enable surgeons to use the tool in the operating room. The company is also working to develop it further, adding new functionalities and areas of application.
Since the tool uses machine-learning algorithms, it will continuously learn and improve, recognising more tissue types and acquiring more knowledge with every operation.
“I hope that in the long term it’ll be a full medical product presented in many different medical facilities, and we hope to deliver as many useful functionalities for different kinds of specialities as possible,” says Wlasny.
It is early days still, but in a few years’ time AR-enhanced surgery could well become the norm. According to a recent review in the Journal of Healthcare Engineering, applications in this space are developing rapidly (albeit with various teething problems that need addressing). The authors concluded that, in the future, AR “will likely serve as an advanced human-computer interface, working in symbiosis with surgeons, allowing them to achieve even better results.”
From apoQlar’s standpoint, there is no doubt that AR will revolutionise current surgical options.
As the company’s founder Sirko Pelzl recently told the Hamburg News, “It’s not a question of whether this will happen, but whether we will lead this revolution.”
Live streaming surgery – the first AR procedure
Dr Rafael Grossmann was the first practitioner to use Google Glass during a surgical procedure. Here he recounts the experience:
“Obviously, one of the main concerns regarding the use of Google Glass during surgery, with live streaming of data, would be to take every measure and to ensure the privacy of the patient’s health information (PHI).
“That’s exactly what I did. Not only did I obtain informed consent about what we were going to attempt (and documented it), but most importantly, I made sure that no recording or transmission of any identifying information was done. The streaming of video and photos, to myself through Google Glass, did not reveal any PHI, or even show the patient’s face.
“By performing and documenting this event, I wanted to show that this device and its platform were intuitive tools that have a great potential in healthcare, and, specifically for surgery, could allow better intra-operative consultations, surgical mentoring and potentiate remote medical education, in a very simple way. The patient involved needed a feeding tube and we chose to place it endoscopically, with a procedure called percutaneous endoscopic gastrostomy (PEG). Since it was the first time, I wanted to do this during a simple and commonly performed procedure, to make sure that my full attention was not diverted from taking excellent care of the patient.
“I arranged for a Google Hangout (HO) between my Glass and a Google account I created ahead of time for this very purpose. The connection is remote. The iPad used as a receiver was just yards away, but it could have been thousands. Before starting the operation, I briefly recorded myself explaining the planned event, and once again, talked about the importance of not revealing any PHI.
“I had Google Glass on at all times, with the HO active throughout the procedure. The live video images that I saw through Google Glass were projected in the iPad screen, remotely. We kept the volume down on purpose. We tried to keep it very simple and straightforward. As I said, even the procedure was a simple one. I was able to show not just the patient’s abdomen, but also the endoscopic view, in a very clever, simple and inexpensive way.
“The whole thing was fairly quick and went very well. We used homemade techniques, so the pictures and video were not optimal, but I think the point stands: We demonstrated [that] Google Glass streaming during live surgery, by a Glass Explorer surgeon, was possible.”