Friday, January 11, 2013

Surgeons may use hand gestures to manipulate MRI images in OR

Public release date: 10-Jan-2013

Contact: Emil Venere
venere@purdue.edu
765-494-4709
Purdue University

WEST LAFAYETTE, Ind. - Doctors may soon be using a system in the operating room that recognizes hand gestures as commands, telling a computer to browse and display medical images of the patient during surgery.

Surgeons routinely need to review medical images and records during surgery, but stepping away from the operating table and touching a keyboard and mouse can delay the procedure and increase the risk of spreading infection-causing bacteria, said Juan Pablo Wachs, an assistant professor of industrial engineering at Purdue University.

"One of the most ubiquitous pieces of equipment in U.S. surgical units is the computer workstation, which allows access to medical images before and during surgery," he said. "However, computers and their peripherals are difficult to sterilize, and keyboards and mice have been found to be a source of contamination. Also, when nurses or assistants operate the keyboard for the surgeon, the process of conveying information accurately has proven cumbersome and inefficient since spoken dialogue can be time-consuming and leads to frustration and delays in the surgery."

Researchers are creating a system that uses depth-sensing cameras and specialized algorithms to recognize hand gestures as commands to manipulate MRI images on a large display. Recent research to develop the algorithms has been led by doctoral student Mithun George Jacob.

Findings from the research were detailed in a paper published in December in the Journal of the American Medical Informatics Association. The paper was written by Jacob, Wachs and Rebecca A. Packer, an associate professor of neurology and neurosurgery in Purdue's College of Veterinary Medicine.

The researchers validated the system by working with veterinary surgeons to collect a set of gestures natural for clinicians and surgeons. The surgeons were asked to specify functions they perform with MRI images in typical surgeries and to suggest gestures for commands. Ten gestures were chosen: rotate clockwise and counterclockwise; browse left, right, up and down; increase and decrease brightness; and zoom in and out.
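To make the mapping concrete, here is a minimal sketch in Python of how the ten gestures might dispatch to viewer commands. The Gesture and ImageViewer names, the browse semantics (left and right stepping through slices, up and down stepping through image series) and the step sizes are illustrative assumptions, not details taken from the paper.

    from enum import Enum, auto

    class Gesture(Enum):
        # The ten gestures reported by the researchers.
        ROTATE_CW = auto()
        ROTATE_CCW = auto()
        BROWSE_LEFT = auto()
        BROWSE_RIGHT = auto()
        BROWSE_UP = auto()
        BROWSE_DOWN = auto()
        BRIGHTNESS_UP = auto()
        BRIGHTNESS_DOWN = auto()
        ZOOM_IN = auto()
        ZOOM_OUT = auto()

    class ImageViewer:
        # Hypothetical viewer state; the real system drives a large OR display.
        def __init__(self, num_slices, num_series=1):
            self.num_slices = num_slices
            self.num_series = num_series
            self.slice = 0
            self.series = 0
            self.rotation_deg = 0
            self.brightness = 1.0
            self.zoom = 1.0

        def apply(self, gesture):
            # Dispatch one recognized gesture to the matching command.
            if gesture is Gesture.ROTATE_CW:
                self.rotation_deg = (self.rotation_deg + 90) % 360
            elif gesture is Gesture.ROTATE_CCW:
                self.rotation_deg = (self.rotation_deg - 90) % 360
            elif gesture is Gesture.BROWSE_LEFT:
                self.slice = max(0, self.slice - 1)
            elif gesture is Gesture.BROWSE_RIGHT:
                self.slice = min(self.num_slices - 1, self.slice + 1)
            elif gesture is Gesture.BROWSE_UP:
                self.series = min(self.num_series - 1, self.series + 1)
            elif gesture is Gesture.BROWSE_DOWN:
                self.series = max(0, self.series - 1)
            elif gesture is Gesture.BRIGHTNESS_UP:
                self.brightness = min(2.0, self.brightness + 0.1)
            elif gesture is Gesture.BRIGHTNESS_DOWN:
                self.brightness = max(0.1, self.brightness - 0.1)
            elif gesture is Gesture.ZOOM_IN:
                self.zoom = min(4.0, self.zoom * 1.25)
            elif gesture is Gesture.ZOOM_OUT:
                self.zoom = max(0.25, self.zoom / 1.25)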

Critical to the system's accuracy is the use of "contextual information" in the operating room -- cameras observe the surgeon's torso and head -- to determine and continuously monitor what the surgeon wants to do.

"A major challenge is to endow computers with the ability to understand the context in which gestures are made and to discriminate between intended gestures versus unintended gestures," Wachs said. "Surgeons will make many gestures during the course of a surgery to communicate with other doctors and nurses. The main challenge is to create algorithms capable of understanding the difference between these gestures and those specifically intended as commands to browse the image-viewing system. We can determine context by looking at the position of the torso and the orientation of the surgeon's gaze. Based on the direction of the gaze and the torso position we can assess whether the surgeon wants to access medical images."

The hand-gesture recognition system uses Kinect, a depth-sensing camera developed by Microsoft for consumer gaming systems, where it tracks players' bodies and hands; here it maps the surgeon's body in 3-D. Findings showed that integrating context allows the algorithms to accurately distinguish image-browsing commands from unrelated gestures, reducing false positives from 20.8 percent to 2.3 percent.
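A minimal sketch of such contextual gating, assuming a simplified skeleton representation and hypothetical angle thresholds (the actual system fuses richer intention and attention cues):

    import math

    # Hypothetical thresholds; the published system fuses richer cues.
    GAZE_TOLERANCE_DEG = 25.0   # gaze must fall within this cone of the display
    TORSO_TOLERANCE_DEG = 40.0  # torso must roughly face the display

    def offset_from_display(position_xy, facing_deg, display_xy):
        # Angle in degrees between a facing direction and the direction
        # from the body part to the display.
        dx = display_xy[0] - position_xy[0]
        dy = display_xy[1] - position_xy[1]
        to_display_deg = math.degrees(math.atan2(dy, dx))
        diff = abs(to_display_deg - facing_deg) % 360.0
        return min(diff, 360.0 - diff)

    def gesture_is_intended(skeleton, display_xy):
        # Accept a gesture as a command only when the surgeon's gaze and
        # torso are oriented toward the image display; gestures made while
        # turned toward colleagues are treated as unintended.
        gaze_off = offset_from_display(skeleton["head_xy"],
                                       skeleton["gaze_deg"], display_xy)
        torso_off = offset_from_display(skeleton["torso_xy"],
                                        skeleton["torso_deg"], display_xy)
        return gaze_off <= GAZE_TOLERANCE_DEG and torso_off <= TORSO_TOLERANCE_DEG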

"If you are getting false alarms 20 percent of the time, that's a big drawback," Wachs said. "So we've been able to greatly improve accuracy in distinguishing commands from other gestures."

The system also has been shown to have a mean accuracy of about 93 percent in translating gestures into specific commands, such as rotating and browsing images.

The algorithm takes into account what phase the surgery is in, which aids in determining the proper context for interpreting the gestures and reducing the browsing time.

"By observing the progress of the surgery we can tell what is the most likely image the surgeon will want to see next," Wachs said.

The researchers also are exploring context using a mock brain biopsy needle that can be tracked in the brain.

"The needle's location provides context, allowing the system to anticipate which images the surgeon will need to see next and reducing the number of gestures needed," Wachs said. "So instead of taking five minutes to browse, the surgeon gets there faster."

Sensors in the surgical needle reveal the position of its tip.
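As a minimal sketch of how a tracked tip could drive image selection, assuming evenly spaced axial slices (the geometry and numbers are illustrative, not from the study):

    def slice_nearest_tip(tip_z_mm, first_slice_z_mm, slice_spacing_mm, num_slices):
        # Pick the MRI slice index closest to the tracked needle tip,
        # assuming axial slices evenly spaced along the z-axis.
        index = round((tip_z_mm - first_slice_z_mm) / slice_spacing_mm)
        return max(0, min(num_slices - 1, index))

    # Example: tip at z = 42.5 mm in a 30-slice volume starting at z = 0,
    # spaced 3 mm apart -> slice 14 is shown without any browsing gestures.
    print(slice_nearest_tip(42.5, 0.0, 3.0, 30))  # -> 14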

###

The research was supported by the Agency for Healthcare Research and Quality, grant number R03HS019837.

Writer: Emil Venere, 765-494-4709, venere@purdue.edu

Sources: Juan Pablo Wachs, 765-496-7380, jpwachs@purdue.edu

Related Web sites:

Juan Pablo Wachs: http://web.ics.purdue.edu/~jpwachs/

IMAGE CAPTION:

This table shows hand gestures surgeons might use in the operating room to browse and display medical images of the patient during an operation. Surgeons routinely need to review medical images and records during surgery, but stepping away from the operating table and touching a keyboard and mouse can delay the procedure and increase the risk of spreading infection-causing bacteria. (Purdue University photo)

A publication-quality photo is available at http://www.purdue.edu/uns/images/2013/gestures-table.jpg

ABSTRACT

HAND-GESTURE-BASED STERILE INTERFACE FOR THE OPERATING ROOM USING CONTEXTUAL CUES FOR THE NAVIGATION OF RADIOLOGICAL IMAGES

Mithun George Jacob (1), Juan Pablo Wachs (1), Rebecca A. Packer (2)

(1) School of Industrial Engineering, Purdue University

(2) Departments of Basic Medical Sciences and Veterinary Clinical Sciences, College of Veterinary Medicine, Purdue University

This paper presents a method to improve the navigation and manipulation of radiological images through a sterile hand gesture recognition interface based on attentional contextual cues. Computer vision algorithms were developed to extract intention and attention cues from the surgeon's behavior and combine them with sensory data from a commodity depth camera. The developed interface was tested in a usability experiment to assess its effectiveness. An image navigation and manipulation task was performed, and the gesture recognition accuracy, false positives and task completion times were computed to evaluate system performance. Experimental results show that gesture interaction and surgeon behavior analysis can be used to accurately navigate, manipulate and access MRI images, and therefore this modality could replace the use of keyboard- and mouse-based interfaces.

Note to Journalists: A video related to the research is available at http://youtu.be/jfgX3KGJdsk, and the research paper is available by contacting Emil Venere, 765-494-4709, venere@purdue.edu.


Source: http://www.eurekalert.org/pub_releases/2013-01/pu-smu011013.php

