
AI in government: Why smart risk-taking beats playing it safe
Jun 18, 2025
6 min. watch

Federal agencies are investing in AI, cloud, and open-source technologies at scale. According to our Federal Software Reimagined report, about half of federal agency employees are being trained in these areas. But training alone isn't enough: leaders must create the conditions for employees to safely apply new skills in real-world settings.

In this short video, ICF experts discuss why building a “safe-to-fail” culture is essential for effective AI adoption in government. They share insights from our research and their own experience helping agencies navigate the risks and rewards of emerging technology.


What this means for government leaders

A safe-to-fail culture encourages experimentation, recognizing that trying new approaches with AI and cloud technologies will sometimes lead to failure. The key is to create an environment where those failures become learning opportunities rather than career risks. This requires visible support from agency leadership, dedicated time and space for experimentation, and recognition for employees who take smart risks, even when outcomes are uncertain.
