AI Chatbots and Their Impact on Election-Related Information: A Joint Study by DAI-Africa and Democracy Reporting International (DRI)

Extended Abstract

AI chatbots have become a powerful tool for providing quick access to information. While they have the potential to democratize information dissemination, they also carry the risk of spreading misinformation and disinformation—whether unintentionally or deliberately. The accuracy of AI responses is especially critical in contexts like elections, where the dissemination of incorrect or misleading information can undermine public trust, suppress voter turnout, or fuel political tensions.

In this context, DAI-Africa is collaborating with Democracy Reporting International (DRI), an international NGO based in Germany with a global footprint, to investigate the role of AI chatbots in the distribution of election-related information. Our joint study evaluates the performance of four AI chatbots—Claude, ChatGPT 4.0, Copilot, and Gemini—by testing them with eleven election-related questions ahead of Ghana’s December 7, 2024 General Elections. These questions were posed in English and in the four most widely spoken Ghanaian languages—Akan, Mole-Dagbani, Ewe, and Ga-Adangbe—which together represent 85.4% of the population. The questions cover various aspects of the electoral process, including voter registration, voting procedures, and the declaration of results.

In multilingual societies like Ghana, AI chatbots have the potential to bridge communication gaps, fostering inclusivity and accessibility. However, our preliminary findings highlight significant challenges. The chatbots showed language-processing gaps in the local languages, often producing incoherent, irrelevant, or incomplete responses. Their contextual understanding was also inconsistent: they frequently offered out-of-context answers, particularly on complex issues such as electoral laws and procedures. Finally, they made inconsistent use of external resources, such as links to official documents or websites, to support their answers; in many cases, they relied heavily on generalized responses, some of which were entirely unrelated to the questions posed.

We look forward to sharing a detailed report of our findings with all stakeholders by the end of the first quarter of 2025. This research aims to deepen understanding of the strengths and weaknesses of AI chatbots in election-related contexts and to inform future efforts to ensure accurate, inclusive, and contextually appropriate information dissemination.