Are we focusing too much on the risks of AI and not the potential for good?

Beth Simone Noveck
4 min read · Oct 31, 2023
This image was generated by Dall-E 3 with the prompt “The AI Dilemma: Governing for Safety or Steering Towards Progress”.

At almost 20,000 words, President Biden’s behemoth executive order on AI mandates a laundry list of actions from federal departments and agencies. The Department of Commerce’s National Institute of Standards and Technology needs to ensure that companies submit safety test results and develop “guidelines and best practices, with the aim of promoting consensus industry standards, for developing and deploying safe, secure, and trustworthy AI systems.” The Council of Economic Advisors and the Department of Labor have to address the unemployment risks to workers and how to mitigate them. Other agencies have to identify the potential for AI to be misused to create weapons and biohazards.

While there’s a lot to like here, we have to ask: are we focusing so much on the risks that we are failing to invest in and maximize the potential for AI to do good?

To be sure, the executive order mentions positive goals such as promoting American competitiveness. The order alludes to the fact that AI can transform education through personalized tutoring, and offers the promise of increased productivity. But the narrative surrounding AI — and most of this 111-page executive order — is cautionary.

Like any tool, AI’s impact is shaped by how we wield it. If we fail to ask and answer what we can do to use AI for Good, especially to address our hardest problems, we will fail to realize those opportunities.

Missed Opportunities

The federal government’s AI pronouncement stands in stark contrast to the “responsible experimentation approach” adopted by the City of Boston — the first policy of its kind in the US, which encourages public servants to “try these tools for yourselves to understand their potential.”

While the federal executive order prohibits agencies from banning AI, creates an AI lead in each agency, and calls for providing secure and reliable generative AI capabilities, the approach is far more circumspect than the City’s: “Agencies should instead limit access, as necessary, to specific generative AI services based on specific risk assessments.”

The federal approach is far more alarmist and fails to address, as Boston did, all the ways in which AI could be used to improve governance. AI can simplify complex governmental processes, making them more accessible to the average person in plain English or other languages, and in oral formats for those with low literacy. For example,

Innovate Public Schools, a California nonprofit focused on supporting parents, is pioneering a project with students in our AI for Impact class at Northeastern University to simplify and translate the complex wording in the Individualized Education Plans given to the 15% of public school students with a disability. Helping families understand the IEP is a first step in enabling them to advocate on behalf of their students.

Generative AI chatbots can also provide instant answers to common queries 24/7. Mass General Brigham is testing the use of a chatbot to provide quick, conversational answers to health questions from doctors, drawing on a vast database of medical research articles. The Commonwealth of Massachusetts “Ask MA” chatbot allows residents to type and get answers to their questions about government services around the clock.

AI can analyze large data sets to identify biases and inefficiencies in systems, promoting better governance. In communities plagued by transit issues, urban planners traditionally used intermittent surveys. Now, AI can combine data from traffic cameras, ticketing systems, and GPS to detect disparities in transport resources between rich and poor neighborhoods and enhance urban planning.

We need people to use these tools to know what is possible and to be able to craft sound guidelines and policies for others.

The Silence on Democracy is Deafening

The executive order is regrettably silent on any mandate to study and promote how AI can advance democracy and mitigate the risks to it. AI could make it easier for governments to listen to their citizens. Instead of voluminous comments that no one has time to read, generative AI could categorize and summarize citizen input. At the Massachusetts Institute of Technology, Professor Deb Roy uses AI to create a “digital hearth” that analyzes and extracts learning from resident conversations. In 2022, the City of Cambridge used Roy’s Cortico technology to run a series of issue-based community conversations designed to gather resident feedback on the choice of the next City Manager.

Our students in AI for Impact are working with the Citizens Foundation in Iceland and the Museum of Science in Boston on a national conversation about literacy and equity that will launch next month. AI is making it possible to run that dialogue efficiently and effectively.

UrbanistAI, a Finnish-Italian initiative, is using AI to turn the public’s ideas for how their city should be designed into hyper-realistic photographs that communities can discuss. In Helsinki, the technology is helping residents and city officials design car-free streets together. Using AI prompts, participants visualize changes like adding planters or converting roads into pedestrian zones. The technology even incorporates a voting feature, allowing community members to weigh in on each other’s designs. Now you don’t need a degree in urban planning or artistic skills to see how your ideas could transform your community.

Paving the Way Forward

While it’s crucial to be wary of AI’s risks, it’s equally important to embrace its positive capabilities.

We need investment in research and development as well as policy promoting the use of AI for Good, including AI to strengthen democracy.

As the federal government moves forward to create policy on how federal public servants use AI, it would do well to learn from Boston: resist fear-mongering in favor of approaches geared toward learning how to use these powerful new technologies for public good.
