
Improving Safety Command Usage Among Employees in a Commercial Kitchen through a Virtual Simulation
Project Overview
Background
Kitchen safety commands like “corner,” “behind,” and “hot corner” help prevent struck-by accidents by alerting others to nearby hazards.
At Oregon State University’s commercial dining facilities, student employees were observed using these commands infrequently and improperly, contributing to incidents.
Objectives
Understand why student employees did not use safety commands in the kitchen.
Develop, test, and deploy a virtual simulation game to educate and train employees to use safety commands.
Outcome
- 32% increase in safety command usage.
- Observed reduction in struck-by injuries.
- One commercial kitchen at Oregon State University integrated the KitchenSpeak training simulation game into its onboarding process.
Team
- My role: Human factors engineer, lead software developer, UX researcher.
- Team members: 2 project supervisors.
- Stakeholders: West Hall Dining Center.
Tools Used in this Project

Qualtrics
For survey development

Unity 3D
For game development

3ds Max
To model the real-world kitchen environment from reference photos and the architectural plan.

SQL
For database development to store relevant game data.

R
For data analysis
How Did I Solve the Problem?

Methodology



Contextual Inquiry
Volunteered as an undercover kitchen employee to observe how safety commands are taught and used in practice.



Gap Analysis
Compared the current state of safety-command training and usage to the ideal state to identify root causes and potential solutions.



Software Development
Developed a non-immersive, browser-based virtual simulation game using Unity 3D and 3ds Max to teach and practice kitchen safety commands.



Survey
To assess usability and validity of the virtual simulation after incorporating feedback from the focus group.



Signal Detection Analysis
To assess the accuracy and reliability of a developed "corner" counting device.



Comparative Analysis
Used to compare “corner” usage between two commercial kitchens to validate the simulation's effectiveness.
Contextual Inquiry
What did I do?
- Volunteered as an undercover kitchen employee to observe how safety commands are taught and used in practice.
- Spoke informally with several younger staff members to gather their perceptions of safety commands.
What were the results?
- Safety command training occurred only once during the initial onboarding session.
- Young employees rarely used safety commands—they found them unfamiliar and unimportant; one remarked, “It just makes me feel dumb yelling in the kitchen.”
- These observations led me to suspect that low usage was driven by both a lack of appreciation for the commands’ purpose and insufficient practice.
- Armed with this insight into the kitchen’s safety system, I proceeded to perform a gap analysis between the current state and the ideal safety-command usage.
What challenges did I face?
- Maintaining cover without biasing behavior: I couldn’t disclose my research goals, so conversations sometimes felt guarded; I overcame this by using casual icebreakers (e.g., offering help on tasks) and then debriefing immediately after each shift to capture honest reflections.
- Covert note-taking: Taking real-time notes risked exposure; I relied on brief mental mnemonics during my shift and completed detailed field notes in a private debrief at the end of each day.
Gap Analysis
What did I do?
- Conceptualized the kitchen’s safety system as a Human Activity System, allowing for a holistic analysis of how safety commands fit within staff workflows.
- Conducted ethnographic observations to model the current state of the system in action.
- Interviewed the three kitchen managers to articulate the ideal state, mapping their vision of optimal safety-command usage.
What were the results?
- Identified a clear gap between current and ideal states, driven by a lack of awareness of the safety commands’ purpose and training methods misaligned with the kitchen’s high-traffic, high-turnover environment.
- Based on these findings, I recommended and began developing a virtual game–based training tool to provide experiential practice of safety commands in a more engaging, repeatable format.
What challenges did I face?
- Defining a measurable ideal state: Managers had never specified how often or in what contexts commands should be used. I overcame this by guiding them with targeted “perfect world” questions (e.g., “In an ideal scenario, how frequently should commands be issued during peak service?”).
- Translating observations into the framework: Mapping real-world behaviors into the Human Activity System proved complex. I iteratively mapped observed actions to system components and validated each iteration with managers.
- Capturing high-turnover dynamics: The kitchen’s shifting staff made consistent observation difficult. I scheduled short observation blocks across multiple shifts and cross-referenced them with manager interviews to ensure the model reflected true operational rhythms.


Gap Analysis Results
Conceptual Model of Safety Behavior Emergence

Software Development

What did I do?
- Developed a non-immersive, browser-based virtual simulation game using Unity 3D and 3ds Max to teach and practice kitchen safety commands.
- Informed the training design with Kolb’s Experiential Learning Theory and Nicholson’s RECIPE model for effective gamification.
What were the results?
- Conducted two rounds of usability and validity testing to refine the experience.
- Successfully deployed the finalized simulation in a live commercial kitchen environment for staff training.
What challenges did I face?
- Midway through development, stakeholders requested that the voice-to-text module work offline for privacy—an unanticipated requirement.
- I addressed this by transparently communicating technical constraints and adjusting our development plan and timeline, which managed expectations and ultimately resulted in a solution that satisfied stakeholders.
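One way such an offline requirement can be met (a hypothetical sketch, not the project's actual voice-to-text pipeline) is to run speech recognition locally and fuzzy-match the transcript against the known command list, so no audio ever leaves the machine:

```python
from difflib import SequenceMatcher

SAFETY_COMMANDS = ["corner", "behind", "hot corner"]

def match_command(transcript, threshold=0.8):
    """Return the closest safety command to an offline transcript, or None below threshold."""
    best_cmd, best_score = None, 0.0
    for cmd in SAFETY_COMMANDS:
        # Similarity ratio between the cleaned transcript and each known command.
        score = SequenceMatcher(None, transcript.lower().strip(), cmd).ratio()
        if score > best_score:
            best_cmd, best_score = cmd, score
    return best_cmd if best_score >= threshold else None
```

Because the command vocabulary is tiny, even a crude similarity ratio tolerates recognizer misspellings such as “korner” while rejecting unrelated speech.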
Focus Groups
What did I do?
- Designed, tested, and deployed a questionnaire to assess the validity and usability of the KitchenSpeak training game, ensuring it met its goal of improving safety command usage.
- Conducted two rounds of testing:
  - Alpha Test: A focus group of 8 experienced kitchen staff evaluated eight key game aspects—realism of the environment, content and narrative, navigation, information clarity, UI design, speech recognition usability, player choice, and game rules.
  - Beta Test: A broader online survey administered after incorporating Alpha feedback to confirm improvements and gather quantitative usability metrics.
What were the results?
- The Alpha Test revealed several usability issues, notably that participants without gaming experience struggled with the first-person controller navigation.
- In response, I reduced the viewing angles and slowed movement speeds, which resulted in all participants reporting that navigation became significantly easier in subsequent play sessions.
- The Beta Test survey confirmed these improvements, with high scores for navigation, overall usability, and engagement.
What challenges did I face?
- COVID-19 restrictions prevented in-person focus group sessions as originally planned.
- I adapted by hosting the Alpha Test on Zoom, which allowed me to record participant interactions and review their gameplay afterward—ultimately providing richer insights than a live session might have.
Survey
What did I do?
- Conducted a Beta Test with 40 participants via an online survey after integrating feedback from the Alpha Test. The survey measured ease of play, the impact of prior gaming experience on gameplay, and engagement levels, three constructs shown to influence game-based training effectiveness.
What were the results?
- Participants with previous gaming experience completed the simulation faster than those without, yet both groups reported similar ease-of-play on a 5-point Likert scale (3.63/5 for non-gamers; 4.0/5 for gamers).
- Engagement, assessed both by self-reported immersion (4.7/5) and by in-game activity, was high: out of 160 prompts delivered to 32 participants, 149 (93.1%) received valid responses, with only 11 (6.9%) left unanswered.
What challenges did I face?
- Defining and quantifying user engagement, a meta-construct composed of multiple dimensions. I reviewed existing engagement-measurement studies to draft survey items, and through seven iterative revisions improved internal consistency (Cronbach’s α) from unacceptably low to an acceptable reliability level.
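For reference, Cronbach’s α compares the sum of per-item score variances with the variance of respondents’ total scores. A minimal sketch with made-up Likert responses (not the study’s data):

```python
from statistics import variance

def cronbach_alpha(items):
    """Cronbach's alpha; `items` is a list of per-item score lists (one list per survey item)."""
    k = len(items)
    total_scores = [sum(resp) for resp in zip(*items)]  # one total per respondent
    item_var_sum = sum(variance(scores) for scores in items)
    return k / (k - 1) * (1 - item_var_sum / variance(total_scores))

# Hypothetical 5-point Likert responses: 3 items x 4 respondents.
alpha = cronbach_alpha([
    [4, 5, 3, 4],
    [4, 4, 3, 5],
    [5, 5, 2, 4],
])
```

Items that vary together inflate the total-score variance relative to the per-item variances, pushing α toward 1; unrelated items pull it down, which is why dropping or rewording weak items across revisions raises reliability.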
Signal Detection Analysis

Corner Counting Device
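The analysis details are not shown here, but signal detection sensitivity is conventionally summarized by d′, the difference between the z-transformed hit and false-alarm rates. A minimal sketch (the example counts and the log-linear correction are illustrative assumptions, not the device’s actual figures):

```python
from statistics import NormalDist

def d_prime(hits, misses, false_alarms, correct_rejections):
    """Sensitivity index d' = z(hit rate) - z(false-alarm rate)."""
    # Log-linear correction keeps the z-scores finite when a rate is 0 or 1.
    hit_rate = (hits + 0.5) / (hits + misses + 1)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    z = NormalDist().inv_cdf  # inverse standard-normal CDF
    return z(hit_rate) - z(fa_rate)

# Example: a device that detects 45 of 50 spoken "corner" commands
# and false-alarms on 3 of 50 non-command sounds.
sensitivity = d_prime(hits=45, misses=5, false_alarms=3, correct_rejections=47)
```

A d′ near zero means the device cannot distinguish commands from background noise; values above roughly 2 indicate strong discriminability.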
Comparative Analysis
What did I do?
- Deployed the “corner”-counting device in two kitchens—one where KitchenSpeak training was used (treatment) and one without it (control). Beforehand, I confirmed both kitchens had comparable workflows to ensure a fair comparison.
What were the results?
- A paired t-test showed 32% higher “corner” safety-command usage in the treatment kitchen after KitchenSpeak deployment.
- In the control kitchen, which went without KitchenSpeak over the same 27-day period, “corner” usage gradually declined.
What challenges did I face?
- The kitchens differed in staff size and the number of blind corners within the device’s detection range. I addressed this by normalizing the raw command counts according to the ratio of employees and monitored corners.
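A sketch of the normalization and paired comparison described above (the daily counts, staffing figures, and the employees × corners scaling are hypothetical; the project’s own analysis was done in R):

```python
from math import sqrt
from statistics import mean, stdev

def normalize(counts, n_employees, n_corners):
    """Scale raw daily command counts by staff size and monitored blind corners."""
    return [c / (n_employees * n_corners) for c in counts]

def paired_t(before, after):
    """Paired t statistic over matched per-day normalized rates."""
    diffs = [a - b for a, b in zip(after, before)]
    return mean(diffs) / (stdev(diffs) / sqrt(len(diffs)))

# Hypothetical daily counts from a kitchen with 12 staff and 3 monitored corners.
pre = normalize([30, 36, 33, 39], n_employees=12, n_corners=3)
post = normalize([42, 45, 45, 54], n_employees=12, n_corners=3)
t_stat = paired_t(pre, post)  # positive t indicates higher post-training usage
```

Dividing by employees × monitored corners puts both kitchens’ counts on a common per-opportunity scale before the paired test, so a larger crew or more corners in range cannot masquerade as higher command usage.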
