Scientists have discovered that artificial intelligence becomes a more effective debate partner and reaches more accurate conclusions when allowed to mimic the messy, real-time dynamics of human conversation, including interruptions and silence.
Researchers from the University of Electro-Communications in Japan proposed a framework in which large language models (LLMs) are not bound by rigid, turn-based communication. Instead, each model was assigned a personality based on classical psychology traits, enabling it to speak out of turn, cut off other speakers, or remain silent.
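A personality could plausibly be injected through each agent's system prompt. The sketch below is illustrative only: the article says just "classical psychology traits," so the use of the Big Five trait set, the trait levels, and the prompt wording are all assumptions.

```python
# Hypothetical sketch: giving each LLM agent a personality via its
# system prompt. The Big Five traits are an assumed interpretation of
# "classical psychology traits"; the real framework may differ.
BIG_FIVE = ["openness", "conscientiousness", "extraversion",
            "agreeableness", "neuroticism"]

def persona_prompt(traits: dict) -> str:
    """Build a system prompt stating how strongly this agent
    expresses each trait (unspecified traits default to 'moderate')."""
    lines = [f"- {t}: {traits.get(t, 'moderate')}" for t in BIG_FIVE]
    return ("You are a participant in a group discussion with this "
            "personality profile:\n" + "\n".join(lines) +
            "\nYou may interrupt, speak out of turn, or stay silent, "
            "as your personality dictates.")

# Usage: an outspoken, disagreeable debater.
prompt = persona_prompt({"extraversion": "high", "agreeableness": "low"})
```

Prompt-level persona conditioning is a common technique for multi-agent LLM setups; it requires no model retraining, only a different system message per agent.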
The team restructured the LLMs' exchanges so that responses were processed sentence by sentence, and tested three conversational settings: fixed speaking order, dynamic order, and dynamic order with interruption enabled. The last setting used an "urgency score" that let an agent cut in the moment it spotted a critical error or an essential point.
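The interruption mechanism described above can be sketched as a loop that streams a speaker's reply one sentence at a time and lets any listener seize the floor when its urgency score crosses a threshold. Everything here is an assumption for illustration: the function names, the 0.7 cutoff, and the keyword-based scorer stand in for whatever the researchers' actual scoring prompt does.

```python
# Hedged sketch of urgency-gated interruption. The threshold value and
# the toy scorer are assumptions; a real system would ask the LLM
# itself to rate urgency.
from dataclasses import dataclass

@dataclass
class Agent:
    name: str
    personality: str
    urgency_threshold: float = 0.7  # assumed cutoff for interrupting

    def score_urgency(self, sentence: str) -> float:
        # Placeholder scorer: rate how urgently this agent needs to
        # respond (0.0-1.0). Here we just flag apparent errors.
        return 0.9 if "error" in sentence.lower() else 0.1

def stream_reply(speaker_sentences, listeners):
    """Deliver a reply one sentence at a time; a listener whose
    urgency score exceeds its threshold interrupts mid-reply."""
    for sentence in speaker_sentences:
        for agent in listeners:
            if agent.score_urgency(sentence) > agent.urgency_threshold:
                return agent, sentence  # floor passes to the interrupter
    return None, None  # reply finished without interruption

# Usage: a listener spots an error mid-stream and takes the floor.
listener = Agent("B", "conscientious")
interrupter, at_sentence = stream_reply(
    ["The capital of Australia is Sydney, which is an error.",
     "Therefore the answer is (b)."],
    [listener],
)
```

Processing sentence by sentence is what makes this possible at all: with whole-message turns there is no point mid-reply at which another agent could break in.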
When evaluated on 1,000 questions from the Massive Multitask Language Understanding benchmark, accuracy improved significantly. In a scenario where one agent started from an incorrect answer, accuracy rose from 68.7% (fixed order) to 79.2% (interruption allowed). In a harder scenario with two initially incorrect agents, accuracy increased from 37.2% to 49.5%.
The researchers believe these findings suggest that discussions shaped by personality and necessary interruptions can produce better outcomes than strictly polite, turn-based exchanges, paving the way for more effective AI collaboration in creative and decision-making domains.