Video Annotation Over Shared Display
There is a long-standing interest in designing shared displays to support group processes in various task settings. Although video annotation tools have been studied in simulated experimental tasks such as games, their use in real, complex tasks has remained relatively unexplored.
In this project, we evaluated the use of a video annotation tool (based on Microsoft Kinect) in surgical training, using an A/B testing method that compared the video annotation condition with a traditional condition using audio-only instruction. We recruited 10 participants. Watch the video (below) to learn how the video annotation tool works.
We demonstrated that the video annotation tool increases communication efficiency between the trainer and the trainee, and consequently decreases both the trainer's cognitive load in giving instruction and the trainee's cognitive load in performing the surgery while listening to the instruction. We suggested design guidelines for improving the video annotation tool to better support collaborative physical tasks.
I designed a mixed-methods study combining quantitative and qualitative approaches.
I mentored a team of three undergraduate research assistants in collecting, coding, and analyzing data.
I demonstrated the costs and benefits of using the video annotation tool in terms of the cognitive load it imposes on trainees and trainers, and its effect on the structure of communication between team members.
I handled all the data collection and the majority of data analysis.
I was the lead author of the AMIA 2019 paper.
Types of data I collected and analyzed
Video, audio, eye-tracking, electrodermal activity, and questionnaires.
Methods I used
Mixed methods (i.e., quantitative and qualitative), A/B testing, observation, ethnographic conversation analysis, statistical tests, open coding, axial coding, and grounded theory.
Computer Supported Cooperative Work; Socio-Technical Systems; Health Informatics; Multimedia Interaction.
Azin Semsar, Hannah McGowan, Yuanyuan Feng, Hamid Zahiri, Ivan George, Timothy Turner, Adrian Park, Helena Mentis, and Andrea Kleinsmith. Effects of a Virtual Pointer on Trainees' Cognitive Load and Communication Efficiency in Surgical Training. In AMIA Annual Symposium Proceedings. American Medical Informatics Association, 2019.
Yuanyuan Feng, Katie Li, Azin Semsar, Hannah McGowan, Jacqueline Mun, Hamid Zahiri, Ivan George, Adrian Park, Andrea Kleinsmith, and Helena Mentis. Communication Cost of Single User Gesturing Tool in Laparoscopic Surgical Training. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, 611. ACM, 2019.
Yuanyuan Feng, Hannah McGowan, Azin Semsar, Hamid Zahiri, Ivan George, Timothy Turner, Adrian Park, Andrea Kleinsmith, and Helena Mentis. A Virtual Pointer to Support the Adoption of Professional Vision in Laparoscopic Training. International Journal of Computer Assisted Radiology and Surgery, 1-10, May 2018.
Yuanyuan Feng, Hannah McGowan, Azin Semsar, Hamid Zahiri, Ivan George, Timothy Turner, Adrian Park, Andrea Kleinsmith, and Helena Mentis. Virtual Pointer for Gaze Guidance in Laparoscopic Surgery. Society of American Gastrointestinal and Endoscopic Surgeons, 2018.