
AI in government: Why smart risk-taking beats playing it safe
About half of federal agency employees are being trained in AI, cloud, and open-source technologies, according to our recent Federal Software Reimagined report. But that learning shouldn’t happen in a vacuum. Employees need opportunities to incorporate their new skills into their day-to-day work.
Building a “safe-to-fail” culture is an effective way to encourage this kind of trial and error. A safe-to-fail culture starts from the assumption that experimenting with any new technology carries risk. To mitigate that risk, leaders create a controlled environment where failures become teaching moments rather than carrying significant consequences.
When employees can experiment with technology without fear, they grow more comfortable using it, which leads to higher adoption rates. They may also discover new use cases that drive efficiencies in the agency’s mission-critical work.
But a safe-to-fail culture requires buy-in from an agency’s top leaders, who must provide the time and space for employees to learn and experiment. Leaders should also reward employees who try out new applications, even when those experiments don’t pan out.
All experimentation involves risk. But with AI, cloud, and open-source technologies changing so rapidly, not taking risks in software development is a risk in itself.