Less Talking, More Doing: AI for Impact
If I could sum up my general take on AI and social impact: less talking and more doing. The promise of artificial intelligence to catalyze social change is no longer theoretical—it's here. Yet far too many organizations are still caught in cycles of discussion, hesitation, and white papers rather than stepping into cycles of action, feedback, and iteration. Building and using AI is where we get to impact. While discussions of ethical and responsible use are important, they only become "real" in the actual building and use of AI.
At HumanServices.ai, we're not sitting on the sidelines intellectualizing the future—we're shaping it through practice. The sector doesn't need another panel discussion about the potential (or risks) of AI. It needs action: tools, prototypes, pilots, and production systems that change the lives of real people. That's our work: revolutionizing how we approach human services and social innovation. It's time to move from conceptualizing change to operationalizing it.
The Urgency of Now
The accelerating pace of technological change doesn’t wait for five-year strategic plans. AI, automation, and data analytics are already reshaping how services are accessed, how policy is implemented, and how communities advocate for themselves. Social impact organizations must match this velocity with methods that are just as dynamic. That means fewer static roadmaps and more feedback-driven loops.
Agile project management offers precisely that. With shorter cycles, continuous user feedback, and a bias toward iterative delivery, it shifts the emphasis from planning to learning and doing. Indeed, agile is more than a project management methodology; it’s a culture shift. It’s a commitment to fast cycles of delivery, feedback, and adaptation. It allows us to stay laser-focused on what our clients and communities need—and to adjust rapidly when we learn something new. (See the series on Agile for AI Impact.)
Responsible AI Is Built, Not Theorized
Too often, leaders in the social sector feel paralyzed by the need to get it right the first time—especially when serving vulnerable populations. That caution is understandable, but it shouldn’t lead to inertia. Agile organizations are grounded in the idea that action leads to insight. You won’t know what works until you try, measure, and reflect.
Let’s be clear: building AI tools responsibly doesn’t mean freezing in fear of getting it wrong. It means building with users at the center, embedding feedback loops from the start, and staying open to iteration. If your team isn't regularly asking, “What do our clients need right now?” and acting on that data within weeks—not months—you’re missing an opportunity. The most responsible AI is the one that’s been tested, refined, and reshaped based on real-world use, not theory.
A Product Mindset for Public Good
We take a product mindset to social impact. That means shipping working AI. It means measuring outputs and outcomes. It means delighting our users, even when they’re accessing critical government services or community supports. And it means relentlessly asking: “How do we make this better for the people we serve?” In this way, agile product development isn’t just a set of processes—it’s a mindset. It thrives in cultures where experimentation is rewarded, failure is reframed as learning, and trust is extended to frontline teams.
This culture of empowered teams and iterative action is essential for any organization hoping to harness AI for good. Because AI, at its best, is not a magic box—it’s a tool that reflects the quality of the building process and values we bring to it.
Impact Is Measured in Progress
Ultimately, social impact is not measured in case studies or position papers. It's measured in progress. In the child who gets the benefits they're entitled to faster. In the caseworker who can spend more time connecting with people because AI handles the paperwork. In the policymaker who makes better decisions because the data speaks clearly. Above all else, the lens of tangible impact should drive conversations about AI for social good.
Get Started—Today
So yes—let’s keep talking about ethics. Let’s keep advocating for equity and transparency. But let’s also roll up our sleeves and build, test, and use AI. Because that’s where the real transformation happens.
The most important step is the next one. Here’s a challenge: Identify one service or project within your organization. Bring together the team. Define a short sprint—two weeks. Listen to a user. Implement a small change. Measure its impact. Debrief. Then do it again. This is how transformation starts: not with a keynote or a report, but with one small, intentional, measurable act of progress.
Let’s stop talking about the future of AI and start building it—today, inclusively, iteratively, and with purpose.
Let’s go!
Author's Note: I wrote this blog in conjunction with ChatGPT. Transparency in the use of AI is an important principle in the ethical use of AI.