Confidence in science has become increasingly fragmented across the polarized political spectrum in the United States. Overall, the American public shows declining trust in experts and institutions, including scientists and universities, with many people instead “doing their own research” on issues from COVID-19 to climate change. Such self-guided research typically amounts to seeking out and cherry-picking information or viewpoints that support preconceived opinions. Could artificial intelligence (AI) provide a pathway to change these trends?
Scientists have used AI and machine learning tools for decades to help analyze data and answer research questions. What’s new in the past few years is the rapid advance of large language model (LLM)–based chatbot systems that can generate responses to natural language prompts and carry on conversations with impressive sophistication.
Today, AI can increasingly answer scientific questions as well as credentialed experts can. As AI capabilities continue to improve, it’s getting harder to tell whether a human or a chatbot is answering your question. OpenAI recently reported that in the few months between the public releases of its GPT-3.5 and GPT-4 AI models in November 2022 and March 2023, respectively, the system’s accuracy on AP Physics 2 exam questions improved from 25% to 70%. And with OpenAI claiming the fastest-growing tech user base of all time, the public has demonstrated its eagerness to use these capabilities.
In a future where AI has Ph.D.-level command of scientific knowledge and reasoning, everyone can access this expertise. Although there is dramatic potential for spreading misinformation, the democratization of expertise through generative AI also has great potential to improve people’s understanding of science and ease strain between the scientific enterprise and the public it serves.
Chatbots as Expert Interlocutors
Consider the ways in which AI could supplement climate change communication. Historically, expertise in climate science has been siloed in research institutions that are accessible to relatively few people. In contrast, chatbots can give virtually anyone unlimited access to a digital expert to answer their questions—a feat that never will be possible with human expertise.
To be sure, broadening access to evidence and expertise is not a panacea for fixing the problems of trust in science and dispelling unscientific beliefs. Information is not enough. In many cases, distrust or skepticism about scientific issues or consensus stems not from a lack of knowledge, but from the perception that the kinds of actions most aligned with the evidence (e.g., transitioning from fossil fuels to clean energy) conflict with one’s cultural identity or values.
In such cases, discourse can help. Reflective discussion can pull people back from polarized extremes. And early evidence from studies of the latest generation of chatbots has shown that exposure to AI agents representing different policy views can meaningfully shift users’ perspectives. Users also may be willing to ask chatbots questions that they feel too uncertain or insecure to ask a human expert, or to ask a chatbot to clarify information they’ve heard from human communicators.
In addition, a chatbot may be better equipped to meet someone at the level of their current understanding of and thinking about climate change. For inspiration, we may look to other fields, such as medicine, where complex and technical concepts routinely must be communicated to broad and diverse audiences.
For example, a doctor or other medical professional may need to explain a complicated medical scenario to a colleague one minute and to a patient with no medical training the next. The doctor must incorporate and answer questions and be responsive to feedback from each of them. Researchers are actively considering using the current generation of AI tools to fulfill these communication tasks in the near term.
Whereas human communicators may be pressed for time or attempting to engage with a number of individuals at once, there is no time limit on engagement with a chatbot. Climate experts may also struggle to explain concepts as a result of the “curse of knowledge”—that is, they understand something so well they forget that others do not and thus neglect to share foundational information with their audience. A chatbot, on the other hand, can tailor its responses to, and share the most relevant information for, a single user, including not just what is known but also how it is known.
Like any good human communicator, for example, an AI chatbot can report that climate disruption can increase hurricane risks, explain how attribution science works, and even offer links to specific research results relevant to a user’s home region. Chatbots such as YouChat, Perplexity, and Microsoft’s Bing Chat (based on OpenAI’s GPT-4 model) have been designed with dedicated abilities to cite real sources.
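The source-citing behavior described above can be illustrated in miniature. The sketch below (in which the corpus entries and the word-overlap scoring heuristic are purely illustrative, not any vendor’s actual implementation) shows the underlying idea: answer only from a known corpus, and attach the source to every answer.

```python
# Minimal retrieval-grounded answering sketch: the chatbot may answer
# only from a known corpus, and every answer carries its citation.
# The corpus entries and scoring heuristic are illustrative only.

CORPUS = [
    {"source": "IPCC AR6 WG1 Summary for Policymakers",
     "text": "human influence has warmed the atmosphere ocean and land"},
    {"source": "NOAA Atlantic hurricane outlook",
     "text": "warmer sea surface temperatures can intensify hurricanes"},
]

def answer_with_citation(question: str) -> str:
    """Return the best-matching snippet plus its source, or decline."""
    q_words = set(question.lower().split())
    scored = [(len(q_words & set(doc["text"].split())), doc) for doc in CORPUS]
    score, best = max(scored, key=lambda pair: pair[0])
    if score == 0:
        # Declining beats inventing a citation that does not exist.
        return "I don't have a sourced answer for that."
    return f'{best["text"].capitalize()}. [Source: {best["source"]}]'
```

Production systems use far more sophisticated retrieval, but the design principle is the same: constrain the response to material that can be traced back to a real document.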
Further, whereas people, as invested and emotional beings, may get impatient or frustrated when communicating about an issue like climate change with someone who is skeptical or has a different perspective, AI can be programmatically instructed to maintain an even tone.
Context, Interpretation, and Application
Polling has suggested that most Americans are skeptical of using chatbots in sensitive contexts, such as for making medical decisions, but we also know that many Americans are skeptical of human scientists and other human climate communicators. If they are built in a trustworthy fashion, AI chatbots could allow users to interrogate evidence for climate change in a way that feels more objective to them than talking to humans they may perceive as biased.
A specialized AI model could talk a user through how to interpret a highly complex geospatial-temporal climate model, making it more accessible than a scientific paper or even an interactive data visualization tool could.
Interacting with a chatbot also could allow someone to ask questions about what climate change will mean for their local community and get rapid, personalized, and accurate answers with contextualization that not even renowned experts could provide instantly. OpenAI’s GPT-4 model, a general-purpose LLM system with no specialized training in climate change communication, has impressive abilities to answer questions about data, generate new graphs, and even respond with poetic flair.
Recent demonstrations have shown how GPT-4 can generate data visualizations from tables provided by users, iteratively improving them in response to user questions. As a flex, GPT-4 will even explain scientific concepts in Shakespearean rhyme, an example that demonstrates the technology’s potential to tailor information to individual needs and communication styles.
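To make the visualization workflow concrete, here is the kind of short script such a chatbot might produce from a user-supplied table. The decade labels and anomaly values are illustrative placeholders, not real climate observations.

```python
# Toy version of the workflow described above: turn a small
# user-supplied table into a chart. Data values are illustrative
# only, not real climate observations.
import matplotlib
matplotlib.use("Agg")  # render to a file, no display needed
import matplotlib.pyplot as plt

# Hypothetical table a user might paste into a chat.
decades = ["1980s", "1990s", "2000s", "2010s"]
anomaly_c = [0.18, 0.31, 0.51, 0.72]  # illustrative values

fig, ax = plt.subplots()
ax.bar(decades, anomaly_c, color="tab:orange")
ax.set_ylabel("Temperature anomaly (°C)")
ax.set_title("Illustrative decadal anomalies")
fig.savefig("anomalies.png")
```

The iterative part of the demonstration is simply the user asking follow-up questions (“make the bars orange,” “label the axis in °C”) and the model revising this script in response.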
Chatbots also may enable people to use their scientific knowledge—gained from interacting with AI or elsewhere—to benefit themselves and their communities. For example, polls have shown that a large portion of the American population is concerned about climate change, but very few act on their concern by doing things like contacting elected officials or even talking to their friends and family about the issue.
One commonly cited reason for inaction is that it is challenging to know what can be done. A chatbot could provide many ideas for meaningful action, even customizing suggestions to an individual’s interests, skills, and resources. If someone wants to write to an elected official or submit a public comment on proposed legislation, a chatbot could suggest key points capturing their point of view and even help them to articulate those points. If they want to join an advocacy group, it could identify local options.
For all their remarkable potential, we don’t expect or want chatbots to replace human communicators entirely. People will remain essential messengers for many circumstances, audiences, and messages.
For example, people can introduce the topic of climate change in contexts where it otherwise might not be salient. Think of how meteorologists who report on climate change in local news effectively increase their audiences’ understanding of local climate impacts. In addition, climate communicators have opportunities to engage with different publics in a wide range of settings, from schools and state fairs to local and national newspapers, initiating conversations where a chatbot could not. Humans also may be better at that initiation because they can connect over shared interests and values and demonstrate empathy.
To remain up-to-date and effective, chatbots also will require continuous high-quality inputs of information, which human communicators will be uniquely suited to provide. Even as AI assistive tools are already starting to make climate research more efficient, the research itself will continue to be led by humans.
And, of course, it is humans who must work to keep debugging AI tools. The latest AI chatbots have many problems left to solve. They sometimes “hallucinate” nonfactual answers and even cite fake sources. The current class of language models will gladly generate a web link or a paper DOI (digital object identifier) that doesn’t really exist, because that fragment of text is statistically likely to satisfy a human user’s expectations. Chatbots sound confident, even when they shouldn’t be. As a result, they have the counterproductive potential to both generate and perpetuate misinformation.
What’s more, chatbots, powered as they are by algorithms with billions of parameters and huge quantities of data, use energy and hardware resources at prodigious rates and thus carry their own climate and environmental costs. And we face the specter of political polarization among AI systems that mirrors polarization among humans—these remain tools built by and trained on text generated by people, after all.
These issues must be addressed before AI can be used confidently to support public engagement with and trust in scientific evidence.
What’s needed next is experimentation. We must test these strategies for public engagement, education, and activation with new generations of AI chatbots. And we need to develop and experiment with custom-trained LLMs and AI applications that have specialized capabilities in domains like climate.
Surely the giants of the field, such as OpenAI, Microsoft, and Google, will develop tools catering to the scientific and education markets, but we need not wait for or rely on them. Rapid innovation among the open-source community—with, yes, help from big tech—has made it possible for small research groups, companies, and agencies to build and customize their own chatbots and integrate LLMs into their own applications. Climate scientists and AI developers and researchers should look for opportunities to collaborate with each other to design and test the effectiveness and reliability of LLMs and other AI tools with climate communication in mind. Government should play a role here, too; there should be a “public option” for AI.
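The kind of integration a small research group might build can be sketched simply: a domain-specific system prompt wrapped around a conversation loop. In this sketch the model backend is a stub (in practice it would call a hosted API or a local open-source model), and all names and the prompt text are illustrative assumptions.

```python
# Sketch of wrapping an LLM behind a domain-specific system prompt,
# as a small group integrating a model into its own app might.
# The model backend is stubbed; all names are illustrative.

SYSTEM_PROMPT = (
    "You are a patient climate-communication assistant. "
    "Cite sources, state uncertainty, and keep an even tone."
)

def stub_model(messages: list) -> str:
    """Placeholder for a real LLM call (hosted API or local
    open-source model). Echoes the last user message for testing."""
    return "Stubbed reply to: " + messages[-1]["content"]

class ClimateChat:
    def __init__(self, model=stub_model):
        self.model = model
        self.history = [{"role": "system", "content": SYSTEM_PROMPT}]

    def ask(self, user_text: str) -> str:
        # Keep the full conversation history so the model can tailor
        # responses to this user's level of understanding.
        self.history.append({"role": "user", "content": user_text})
        reply = self.model(self.history)
        self.history.append({"role": "assistant", "content": reply})
        return reply
```

Swapping `stub_model` for a real backend is the only change needed to deploy; the conversation management and the communication-focused system prompt are what a climate team would customize and test.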
Despite the hurdles to overcome in developing well-trained and trustworthy chatbots, the urgency with which we must address climate change requires that we consider opportunities to work alongside AI in ways that will engage as much of the public as possible in pursuing solutions as quickly as possible. More broadly, as AI assistive tools improve in their understanding of science, perhaps they can help all of us do the same. If used appropriately, ethically, and sustainably, these tools could usher in a shift in how diverse publics engage with science topics like climate change and medicine, and, in turn, how they form opinions, build trust, and incorporate scientific information into their own decisionmaking.
Nathan E. Sanders (email@example.com), Berkman Klein Center for Internet & Society, Harvard University, Cambridge, Mass.; and Rose Hendricks, Association of Science and Technology Centers, Washington, D.C.