Envisioning anesthesia in 2032
It is easy to notice how devoted anesthesiologists are to their work just by talking to them. Although they invest a lot of effort in staying highly focused on the patient during surgery, the machine and the environment still disconnect the anesthesiologist from the patient. Our vision for this project was to blend the technical and human sides of the anesthesiologist’s work: to enhance their working environment by decentralizing the machine and other pieces of information, and to give them more space to think and act.
That approach raised a couple of social questions, too. How much of the job can be automated? What are the human role and value in this specific job? How far should we go with technical enhancement? We envisioned the (human) anesthesiologist in the leading role of our future concept.
How might we enhance the work of anesthesiologists by blending machines and the operating room?
Personalisation
The remote controller as a ‘token’ for personalisation.
The right data at the right time
Decentralisation of information allows us not only to show the right data, but also to place it in the right context and at the right time.
AR control & contextualisation
With simple and intuitive controls, the controller activates and operates an AR interface. The interface modules are mapped onto the existing OR environment, making them less distracting and easier to locate contextually.
Reduced alarm noise
The many alarms in the OR create an environmental overload of sound, so we designed alarms that are isolated from one another and audible only to the anesthesiologist.
What makes a good anesthesiologist good? Being a good anesthesiologist requires a sensitive, empathetic human being who enjoys helping people. As medical professionals, they are also capable of abstract thinking, forming hypotheses and establishing contact with the patient in a matter of seconds!
Benefits of our system
Research field & observation
Our project partner, Maquet, invited us to visit their R&D studio and production line in Stockholm. That was a great way for us to learn more about the anesthesia machine and its complexity. A little wiser, we went to one of the biggest hospitals in Sweden, Dendrite, where we had our first observation inside an operating room during surgery. Our task was to do ethnographic research on anesthesiologists and gather as much information as possible: talking with them, documenting their steps and learning more about anesthesia from their point of view.
The design process was iterative and human-centered. Field research led to a thorough understanding of the context. We talked to anesthesiologists and doctors to learn more about their tasks and needs. This helped us to define a guiding vision for the user’s role.
The information collected during the contextual research was processed in various forms such as task analyses, user journeys and digital and audio recordings of interviews. Our “research walls” enabled us to spot highlights and identify potential areas for design opportunities.
On-site observation in hospitals was also really important. Every hospital has its own set of rules and bottlenecks, which can look like design opportunities but, in the end, are just part of organizational, political or financial restrictions.
Bringing insights to the “board” (Task analysis and user journey)
While observing, we got a detailed overview of an anesthesiologist’s job, from onboarding (arriving at the hospital, preparing the OR, meeting the patient) to offboarding (waking the patient, cleaning up, writing documentation, patient recovery).
Later, we invited anesthesiologists and doctors to evaluate our findings, refine our understanding and sharpen our ability to spot challenges and opportunities. From our field and desk research, we defined three opportunity areas to pursue: Relaxation, Human-Machine Relationship and Accessibility.
The research presentation covered the full research process. Our professor, Dr. Brandon Clark, suggested presenting the research in a way that would let the audience follow our work and actively engage with it.
Peers, collaborating partners and tutors made up the audience for our workshop-format presentation. We shared our research insights with them, invited them to co-create and engaged them in joint speculation, debating ‘what if’ scenarios.
What stood out? (from Research presentation)
We narrowed our research to the areas we wanted to focus on: blending machines and the anesthesiologist. Through additional research and prototypes, we discovered potential design opportunities for blending the operating room and the machine interface in a simple, unobtrusive way using augmented reality.
Anesthesia today
The anesthesia nurse is trapped between two data sources (the machine screen and the patient’s body). The hierarchy of data is inaccessible and hard to navigate. As a result, many new anesthesia nurses develop a more technical approach to the anesthesia workflow and, during surgery, focus more on the screen than on the patient.
Anesthesia augmented
Decentralising data from the machines and merging it with the patient. Better mobility inside and outside the operating room. A better overview of parameters and the patient’s condition. Designing the future of anesthesia with the human at the center.
AR prototyping research
We invited an anesthesiologist to participate in a workshop. Our goal was to see what kind of data is needed for a specific operation, whether any data is used constantly and, if so, who should define it and how. These questions helped us better define how to execute our idea of blending simplicity and complexity.
AR research & development
Human eye ergonomics
Based on research, we defined guidelines and a design system. We wanted to see how to use general research to create foundational guidelines for our AR system.
- Text (0–20°) – Main focus area; should not be disturbed and should be controlled by the nurse (eye tracking).
- Shape (20–40°) – Area of sharp perception of shapes and curves, where a graph curve can still be noticed. Still an area that shouldn’t be crowded with AR components.
- Color (40–65°) – Area where color is noticeable; safe to interrupt with color alarms and notifications. Close to peripheral vision, yet close enough to sharp vision for focusing on an area of interest.
- Motion (65–120°) – Area ideal for introducing alarms. Using motion in the far peripheral vision to attract the gaze is a good way to prepare the human brain for a change in the field of view.
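The zone boundaries above can be read as a simple lookup: given a UI element’s angular distance from the center of gaze, the guidelines tell you which kind of cue is appropriate there. A minimal sketch, assuming illustrative names and the thresholds listed above (not code from the actual system):

```python
# Hypothetical mapping from angular eccentricity (degrees from the
# center of gaze) to the visual zone defined in the guidelines.
# Zone names and thresholds come from the list above; everything
# else is illustrative.
ZONES = [
    (20, "text"),    # 0-20 deg: sharp focus, reserved for text/monitoring
    (40, "shape"),   # 20-40 deg: shapes and graph curves still legible
    (65, "color"),   # 40-65 deg: color alarms and notifications noticeable
    (120, "motion"), # 65-120 deg: motion cues draw the gaze from periphery
]

def visual_zone(eccentricity_deg: float) -> str:
    """Return the visual zone for an angle from the center of gaze."""
    for limit, zone in ZONES:
        if eccentricity_deg <= limit:
            return zone
    return "outside"  # beyond the usable field of view

# An alarm placed at 70 deg should rely on motion to attract attention,
# while an element at 15 deg sits in the protected text/focus area.
print(visual_zone(70))  # motion
print(visual_zone(15))  # text
```

Such a mapping could let an AR layout engine reject, say, a blinking alarm placed inside the protected text zone before it ever reaches the nurse’s field of view.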
After deciding to go with AR and establishing our design methods, the next big challenge was user testing. Our aim was to determine the right hierarchy and treatment of data in a mixed-reality environment. We used video scenarios and Google Cardboard for concept evaluation.
During the project, we also investigated how we might prototype for AR rapidly and efficiently to get the most out of user testing.
Results of accessibility research with users for “alert position” (the area reserved for the monitoring UI)
Position 1 – Poor results
Bad – Users move their head up when they spot a notification/alarm.
Good – Acceptable spotting.
Position 2 – Good results
Good – Peripheral vision is large enough to easily introduce an alarm, with room to expand without disturbing and enough space to place a CTA next to the alarm.
Bad – Alerts may disturb the nurse’s point of view while monitoring the patient or interacting with physical objects.
Good – High spotting area.
Screen & AR User interface
Creating a concept video
In this phase, we stepped away from the design files and the whiteboard. We defined the feature sets we had, voted on them and put them into a scenario.
We wanted to tell a story that would show the right ‘context of use’ of our system. We took whiteboards and cardboard, scissors and glue, put a storyboard on the wall and created our vision of the OR in 2032.
Also, making a believable concept video requires a lot of “fake it till you make it”.