The Washington Post reported the cause of the Hawaii missile alert: a poorly designed application interface. Upon selecting a drop-down menu item, a user broadcast the alert “BALLISTIC MISSILE THREAT INBOUND TO HAWAII. SEEK IMMEDIATE SHELTER. THIS IS NOT A DRILL.” Fortunately, it was a false alarm. However, it needlessly panicked thousands of Hawaiian residents and tourists – and a few of us stateside as well.
Many in the design community have said the error was preventable: if only the application designers had been more deliberate, cautious, and conscientious, and had tested the interface with users. But we must ask ourselves: given the same design task, would we have created a similar solution?
We rarely recognize widespread, systemic failures before they occur – the Flint water crisis, the Equifax data breach, a presidential election or two. Sure, these failures should have been prevented and could have been prevented. But they were not. Failures happen.
In retrospect, a simple interface fix might have prevented Hawaii’s false alarm. Yet the real failure was the 38 minutes officials took to issue a retraction. Imagine how the situation might have played out if the alert had been immediately followed by the message: “FALSE ALARM – NO MISSILE THREAT. WE APOLOGIZE FOR OUR ERROR. VISIT EHAWAII.GOV FOR MORE INFO.” Annoying, yes. Panic-inducing, no. Rather than hide under bridges and flee into manholes, Hawaiians could have shared a few chuckling groans and indignant eye-rolls.
Good design preserves users’ safety, security, and dignity. In this pursuit, designers sometimes make mistakes. Bad design choices are inevitable, but the effects of bad design choices are not.