State Government AI Landscape Analysis Part 1: How We Assess Government AI
The Government AI Landscape Assessment is the first nationwide analysis of how ready state governments are to deploy artificial intelligence in public services. I was happy to lead the field research on this Code for America initiative in my role as a consultant with HumanServices.ai (HSAI). For each state and the District of Columbia, we reviewed dozens of policy documents, including governors' executive orders on AI, training plans for state employees, and documentation of state technical infrastructure, and each state had the opportunity to review our analysis. We also consulted a national AI Advisory Board to evaluate both the rubric and our assessment of state government capacity across its criteria. The goal was to go beyond anecdotes and tech hype and provide a rigorous, comparative snapshot of how states are preparing for the age of AI.
When we set out to build a rubric to assess state government use of artificial intelligence, we began with a broad, 10-dimension maturity model. That draft framework captured the many organizational, technical, and ethical challenges associated with AI deployment. But as we dug deeper, it became clear that the real question isn’t whether a state is mature in every area of AI development. It’s whether a state is ready.
Through our research and consultations with the Code for America AI Advisory Board and state leaders across the country, we shifted from a 10-part maturity model to a more accessible, actionable, and policy-relevant framework focused on three key dimensions of AI readiness:
1. Leadership and Governance: This dimension examines whether the state has the political will, executive leadership, and governance structures to responsibly guide AI deployment across agencies. It includes the presence of dedicated AI leadership, such as a Chief AI Officer or equivalent, the formation of advisory groups or interagency councils, and the establishment of governance structures that ensure coordination, oversight, and accountability. Strong leadership and clear governance help states chart a unified vision for AI, avoid duplication, and ensure that ethical and equitable use of AI is a shared responsibility across government.
- Early Stage: No designated leadership or governance for AI. Minimal awareness or planning.
- Developing: Task forces or advisory groups exist; some leadership interest; early coordination mechanisms.
- Established: Dedicated leadership (e.g., Chief AI Officer), clear governance frameworks, and coordination across agencies.
- Advanced: Statutory or institutionalized governance, integrated leadership across branches, accountability mechanisms.
2. AI Capacity Building: This dimension assesses how well states are preparing their workforce to understand, manage, and responsibly deploy AI. States that excel in this area are not only training staff—they're also building partnerships with academia and industry, creating specialized programs for public servants, and investing in ongoing skill development. These efforts help agencies build internal expertise and make smarter procurement and implementation decisions.
- Early Stage: No formal AI training/upskilling; limited awareness and internal capacity.
- Developing: Basic awareness training; limited partnerships with universities or private sector for skill-building.
- Established: Comprehensive AI training programs; multiple active partnerships with educational institutions; internal upskilling initiatives.
- Advanced: Advanced training with specialization tracks; robust ecosystem of workforce development partnerships; integration of AI education into standard government workforce development pipelines.
3. Technical Infrastructure & Capabilities: This dimension focuses on whether a state’s technical foundation is strong enough to support responsible AI use. Readiness in this area includes robust data infrastructure, computing resources, and the presence of platforms or systems that allow for cross-agency data sharing and AI experimentation. States further along in this dimension often have centralized cloud environments, strategic partnerships with tech vendors, and clear technical governance to support complex AI implementations.
- Early Stage: Limited technical capacity; underfunded or legacy infrastructure.
- Developing: Some pilot programs, initial infrastructure investments, basic cloud/data capabilities.
- Established: Modernized infrastructure, dedicated teams, and structured technical supports.
- Advanced: Scalable, integrated platforms supporting advanced analytics and AI implementations.
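The three dimensions and four maturity levels above form a simple rubric structure. As a minimal sketch, the framework could be represented in code like this; the class and function names are hypothetical, and the example ratings are invented for illustration, not drawn from the actual assessment.

```python
from dataclasses import dataclass
from enum import IntEnum


class Maturity(IntEnum):
    """The four maturity levels used in each rubric dimension."""
    EARLY_STAGE = 1
    DEVELOPING = 2
    ESTABLISHED = 3
    ADVANCED = 4


# The three readiness dimensions from the framework.
DIMENSIONS = (
    "Leadership and Governance",
    "AI Capacity Building",
    "Technical Infrastructure & Capabilities",
)


@dataclass
class StateAssessment:
    """A hypothetical record of one state's rating on each dimension."""
    state: str
    ratings: dict  # dimension name -> Maturity

    def profile(self) -> str:
        """Render a one-line readiness profile across all three dimensions."""
        parts = [
            f"{dim}: {self.ratings[dim].name.replace('_', ' ').title()}"
            for dim in DIMENSIONS
        ]
        return f"{self.state} | " + "; ".join(parts)


# Illustrative only: these ratings are invented, not from the assessment.
example = StateAssessment(
    state="Example State",
    ratings={
        "Leadership and Governance": Maturity.DEVELOPING,
        "AI Capacity Building": Maturity.EARLY_STAGE,
        "Technical Infrastructure & Capabilities": Maturity.ESTABLISHED,
    },
)
print(example.profile())
```

Because the levels are ordered (`IntEnum`), a profile like this can also be compared or aggregated across states, which is what makes a readiness rubric useful for comparative snapshots rather than one-off anecdotes.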
This simplified readiness framework does more than score states. It offers a snapshot of whether they are poised to responsibly explore and implement AI. And it better reflects where most governments are today: early in the journey, but eager to prepare for what’s ahead.
This rubric provided the basis for Code for America's Government AI Landscape Assessment of where state governments stand in their adoption of artificial intelligence. Our goal was to provide a snapshot of AI use in government today—and to help public sector leaders prepare for what's ahead. We conducted in-depth research, engaged with leaders in all 50 states, and collaborated with our AI Advisory Board to build a robust assessment framework. We hope this shift from maturity to readiness helps policymakers focus on the enabling conditions for success. With the right leadership, direction, and capacity, states can turn AI from hype into help.
Author's Note: I wrote this blog post in conjunction with ChatGPT. Transparency is an important principle in the ethical use of AI.