As much as I wish to resist it, the pull toward artificial intelligence grows stronger every day. And as experiments with LLMs (large language models) proceed, it is becoming easier to have AI ‘converse’ with humans, respond to their concerns, and ultimately change their beliefs. A recent series of experiments tested this – and the results are important for us as citizens of the world and also in the courtroom, where we – and not [yet] bots – are conversing with jurors.
Let me start in the AI universe and conclude with insights from a study co-author about how the findings might apply in the courtroom.
The research comes from the article “The Levers of Political Persuasion with Conversational AI” https://arxiv.org/pdf/2507.13919 (last visited October 4, 2025). And it comes at a particular moment – when interaction with bots and AI is on the rise. According to NPR, “OpenAI says ChatGPT alone now has nearly 700 million weekly users…” with many using it for tasks such as self-help therapy. https://www.npr.org/sections/shots-health-news/2025/09/30/nx-s1-5557278/ai-artificial-intelligence-mental-health-therapy-chatgpt-openai (last visited October 4, 2025).
The “Levers” research looked at three questions, but the one of primary concern here is “what strategies underpin successful AI persuasion?” To answer this, the researchers conducted a series of experiments. Here’s how – “76,977 participants engaged in conversation with one of 19 open- and closed-source LLMs that had been instructed to persuade them on one issue from a politically balanced set of 707 issues.”
How did it play out? Here is a summary from a review of the article:
The results were striking. Conversations on political topics lasted, on average, nine minutes, but the impact was both rapid and lasting. GPT-4o demonstrated a 41% higher persuasive effect, while GPT-4.5 achieved 52% more effectiveness than static messages alone.
Crucially, participants retained their changed opinions 36% to 42% of the time even a month later. The chatbots were particularly effective when conversations were evidence-rich and tailored to the individual…
https://caliber.az/en/post/ai-chatbots-new-persuaders-shaping-politics-beliefs-business (last visited October 4, 2025).
The study also asked which method of persuasion among the many deployed by AI was most effective. Here is language from “Levers,” followed by a visualization:
We then examined how the model’s rhetorical strategy impacted persuasive success. In each conversation, we randomized the LLM’s prompt to instruct it to use one of eight theoretically motivated strategies for persuasion, such as moral reframing, storytelling, deep canvassing, and information-based argumentation (in which an emphasis is placed on providing facts and evidence), as well as a basic prompt (only instruction: “Be as persuasive as you can”)…
The prompt encouraging LLMs to provide new information was the most successful at persuading people…In absolute persuasion terms, the information prompt was 27% more persuasive than the basic prompt…Notably, some prompts performed significantly worse than the basic prompt (e.g., moral reframing and deep canvassing…).
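For readers curious about the mechanics, here is a minimal, hypothetical Python sketch (not the authors’ actual code, and the prompt wordings are my own invention) of the randomization the paper describes: each conversation is assigned one of the strategy prompts, or the basic prompt, as the LLM’s instruction.

```python
import random

# Hypothetical illustration of the design described above: each conversation
# is randomly assigned a persuasion strategy, which becomes the LLM's
# system instruction. Prompt texts are invented for illustration only.
STRATEGY_PROMPTS = {
    "basic": "Be as persuasive as you can.",
    "information": "Persuade the user by providing facts and evidence.",
    "storytelling": "Persuade the user through a vivid, relevant story.",
    "moral_reframing": "Persuade the user by reframing the issue in terms of their moral values.",
    "deep_canvassing": "Persuade the user by eliciting and connecting to their personal experiences.",
    # ...the remaining strategies from the paper would be listed here
}

def assign_condition(issue: str) -> dict:
    """Randomly assign a persuasion strategy for one conversation on `issue`."""
    strategy = random.choice(list(STRATEGY_PROMPTS))
    return {
        "issue": issue,
        "strategy": strategy,
        "system_prompt": f"{STRATEGY_PROMPTS[strategy]} Topic: {issue}",
    }

# Example: one simulated assignment for a single participant's conversation.
print(assign_condition("expanding the use of nuclear power"))
```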
Here are the results as charted:

[Chart from “Levers”: the persuasive impact of each of the eight prompting strategies relative to the basic prompt, with the information prompt performing best.]
Of note to us in the world of advocacy is that in AI conversations, storytelling was less effective at changing minds than providing facts. And the article restated the importance of facts: “Of eight prompting strategies, the information prompt—instructing the model to focus on deploying facts and evidence—yields the largest persuasion gains across studies…”
As an aside, there was a secondary finding that is particularly disturbing in the AI world – as the LLMs refined their arguments, the “facts” they deployed were sometimes incorrect, at rates as high as 30%. We should be disturbed in two ways: people are being fed bad facts, and they rely on them without being able to tell that the ‘knowledge’ is false.
But what about the courtroom? I wrote to one of the “Levers” co-authors, David Rand, professor of information science in the Cornell Ann S. Bowers College of Computing and Information Science. https://business.cornell.edu/faculty-research/faculty/dgr7/ Here is my inquiry:
To me, what was important was the lesson that fact-heavy conversations seemed to have the most impact on changing a person’s perspective and doing so in a way that endured over time. What I’d like to add [to an essay about “Levers”] is whether this finding can apply to courtroom advocates. In court we don’t have conversations; or, said differently, we try to be conversational, but the jurors can’t speak back. But we still try to win hearts and minds and sometimes have to change attitudes.
Here is what Professor Rand replied:
In terms of your question, we haven’t directly tested it, but I see every reason to expect that the same would be true in the context of the courtroom. The key thing is, I believe, that people are paying attention/engaging with the information being provided (i.e. being in a “high elaboration” state as per Petty & Cacioppo’s Elaboration Likelihood Model). When people are paying attention, facts and evidence are the best way to durably change their attitudes. (When they aren’t really paying attention, you need to resort to other methods).
What do I take this to mean? Keep on telling stories to shape what jurors remember and how they will organize the facts that emerge across a trial. But use facts – yes, the Aristotelian “logos” approach – to convince and move those who don’t favor your position to a new belief system. [And, of course, take preventive measures against false facts.]
And the “elaboration likelihood model” that Professor Rand references? You can read about it here – https://www.simplypsychology.org/elaboration-likelihood-model.html – and there will be follow-up in another Brain Lessons piece.
………………….
Professor Rand also suggested additional readings:
You might also like these related papers, which find the same fact/evidence-based persuasion in the context of debunking conspiracy theories:
- “Durably reducing conspiracy beliefs through dialogues with AI,” Science (2024)
- “Just the facts: How dialogues with AI reduce conspiracy beliefs,” working paper