All forms of engineering change the world around us to achieve a desired outcome. Engineering done well means we can trust that what we create will do what it was designed to do. We practice our craft by drawing on a deep understanding of the physical world and how it can be altered, then determine through prototyping and analysis whether those changes will behave as expected.
Of course, there are an astonishing number of trade-offs that must be navigated throughout this process. But fundamentally, we need to know whether something will do what we build it to do. Consider the example of a bridge:
We do not build bridges that might break; that would be foolishly dangerous. Rather, we gather as much information as we can to be confident that the bridge will not fail. For example:
- Properties of the raw materials used in construction. (Steel; concrete)
- Methods of manufacturing its components. (Forging; rolling)
- Arrangement of those components. (Truss structure)
- How the components are attached to one another. (Bolts; welding)
- How the structure interacts with the landscape. (Pilings in the rock)
- Expected load from use. (What drives over it; how often; the forces imparted)
- Anticipated environmental conditions. (Wind; precipitation; seismic events)
By understanding the bridge’s structure in the context of its environment, and assuming a certain level of load—plus a generous safety factor—we confidently determine the bridge will stand.
But what happens when we create something that depends on the capabilities and limitations of people? What if, instead of a physical bridge, we design a human-system in which people work with one another and with technology to accomplish complex, real-time, interdependent goals?
As with the bridge, we learn as much as we can to determine if our system will perform as expected, including:
- The innate physical and cognitive capabilities of operators.
- Knowledge, skills, and abilities developed through training and experience.
- User interfaces, including what, when, and how information is presented.
- User controls, including those requiring physical manipulation.
- Communication and coordination among the team.
- Speed and frequency at which decisions must be made.
- Awareness of what is happening, including what is perceived, understood, and predicted.
- The impact of mistakes.
These factors have an unequivocal effect on whether an operator or team will successfully accomplish their mission, and all of these factors are uniquely human. Can you imagine if the strength of a bridge varied based on how much sleep a beam got the night before? Or if a joint weakened because two adjacent members did not get along? No one would cross that bridge.
This analogy may be absurd, but it accurately captures the complexity and uncertainty inherent in human-systems. As we build technology that is ever more connected to our bodies and minds, a deep understanding of human factors becomes essential. Just as mechanical and civil engineers have the training and expertise to build enduring bridges, human factors engineers ensure our human-systems remain intact.
When human-systems fail, the consequences can be just as catastrophic as a bridge collapsing into a chasm. There are far too many tragedies illustrating these pitfalls: recent examples include the 737 MAX disasters (2018–2019), the USS John S. McCain collision (2017), and a steadily increasing number of autonomous vehicle crashes.
When people physically and cognitively interact with technology, human factors engineers should be involved early and often. They know how to create designs that are usable, adopted, and effective. And just like other engineering disciplines, people’s lives depend on human factors being done right and done well.