AI in government: Why smart risk-taking beats playing it safe
Federal agencies are investing in AI, cloud, and open-source technologies at scale. According to our Federal Software Reimagined report, about half of federal agency employees are being trained in these areas. But training alone isn't enough: leaders must also create the conditions for employees to apply new skills safely in real-world settings.
In this short video, ICF experts discuss why building a “safe-to-fail” culture is essential for effective AI adoption in government. They share insights from our research and their own experience helping agencies navigate the risks and rewards of emerging technology.
What this means for government leaders
A safe-to-fail culture encourages experimentation, recognizing that trying new approaches with AI and cloud technologies will sometimes lead to failure. The key is to create an environment where those failures are treated as learning opportunities, not career risks. This requires visible support from agency leadership, time and space for experimentation, and recognition for employees who take smart risks, even when outcomes are uncertain.
Next in the series