
The Artificial Intelligence (AI) dilemma facing Uganda’s newsrooms ahead of 2026 elections
COVER STORY | RONALD MUSOKE | As Uganda prepares for the 2026 general elections, the country’s journalism industry, upon which millions of voters depend for information, finds itself at a crossroads, according to the Uganda chapter of the Africa Freedom of Information Centre (AFIC), a non-profit that promotes the right of access to information, transparency and accountability across Africa.
At a symposium held in Kampala on July 23 to discuss the opportunities and challenges of using AI in Ugandan newsrooms, participants comprising journalists, academicians, civil society actors, and policymakers heard how Artificial Intelligence (AI) is increasingly shaping the country’s media landscape.
The consensus was that the technology is not only offering powerful tools to enhance news reporting but also introducing significant threats to press freedom, data privacy, and the integrity of information, owing to its susceptibility to abuse by purveyors of disinformation, a vice that could shape public perception and sway electoral outcomes.
Many Ugandan newsrooms are gradually adopting AI tools such as transcription software, translation systems, and data-mining applications. These technologies promise efficiency and scale, particularly in investigative journalism.
“Imagine being able to sift through thousands of court documents or government reports in minutes,” said Raymond Amumpaire, an AI and technology researcher who walked participants through the policy brief titled: ‘Artificial Intelligence and Journalism in Uganda: Opportunities, Challenges, and Policy Pathways.’
He mentioned tools such as Google’s Pinpoint that are already making this possible, helping reporters uncover corruption and hidden links that would otherwise take weeks. Amumpaire said AI is also helping journalists track and counter disinformation through network analysis, which identifies coordinated online behaviour, bot (fake) accounts, and other inauthentic activity. Translation tools, meanwhile, are helping media outlets overcome language barriers and include marginalized voices from Uganda’s diverse linguistic communities.
He noted that for many under-resourced newsrooms, especially those outside major urban centres, AI offers a solution to staffing and time constraints as automated transcriptions, real-time subtitling, and content summarization allow journalists to focus on deeper reporting.

Risks and realities
But AI’s pitfalls are no less formidable, according to the experts at the symposium. Generative AI, for instance, is increasingly being used to create realistic fake images, videos, and audio, enabling the spread of political propaganda and false narratives. And in an election period like the run-up to the 2026 general elections, this could have devastating consequences.
A vivid example was shared during the symposium by Gilbert Sendugwa, the Executive Director of AFIC. During the just-concluded primaries of the ruling National Resistance Movement (NRM) party, a voice note purportedly recorded by a prominent NRM member from western Uganda made the rounds on social media.
In the voice note, the senior NRM member was heard endorsing an erstwhile opponent for the NRM party card. Sendugwa said some media houses in the western region fell for the prank and aired the story. The alleged endorser had to appear on a radio station the following day to clarify that he had never done so.
“So as we go into these elections, more such things are likely to happen, and a number of them could actually incite and result in significant breaches of peace,” Sendugwa said. “All of us therefore need to make every effort to ensure that AI is used for good, but also work together to raise awareness of the potential risks and the need to fact-check.”
2026 election fever
A fortnight ago, the Electoral Commission revised the roadmap for the 2026 general election, setting new nomination and campaign dates for presidential, parliamentary, and local government races. Voting will take place between January 12 and February 9, 2026. In the updated schedule, nominations for presidential candidates will be held on Sept. 23-24, 2025, earlier than the initially planned Oct. 2-3. Parliamentary nominations will be conducted on Sept. 16-17, while those for local government positions including Special Interest Groups (SIGs) will take place between Sept. 3-12.
Justice Simon Byabakama, the Electoral Commission Chairperson, said the changes are meant to give both candidates and the Commission more time to prepare ahead of what is expected to be a highly competitive election. Campaigns for presidential candidates will officially run from Oct. 4, 2025, until Jan. 12, 2026. Parliamentary campaigns will begin earlier, from Sept. 23, while local government campaigns are scheduled to start on Sept. 13. All campaigns will conclude on Jan. 12, 2026.
The polling period, covering presidential, parliamentary, and all levels of local government councils, including city, district, municipal, and sub-county levels, will stretch from January 12 to February 9, 2026, in accordance with Article 61(2) of the Constitution. Byabakama noted that the revised dates are part of broader efforts to ensure adequate time for logistics, voter education, and candidate engagement. However, the move has drawn criticism from sections of the opposition, who argue that the changes could affect their mobilization timelines and campaign strategies.
Algorithmic bias pitfalls
But as the political campaigns draw closer, complete with heightened rhetoric on both traditional and social media platforms, Amumpaire cautioned Ugandan newsroom managers that AI systems – often trained on datasets from outside Uganda – may reproduce biases embedded in those data sources. This, he says, can skew news distribution, prioritize sensational content, or marginalize certain voices.
“If these systems are trained on biased data, they will reflect and reinforce that bias – and this undermines the journalist’s role as a fair and balanced informer,” he said. He noted that most AI systems are trained on massive datasets that may include copyrighted journalistic work. The unauthorized use of this content, he said, raises intellectual property concerns, with major lawsuits already underway globally.
Amumpaire also noted that since AI systems often operate as “black boxes,” this makes it difficult to explain how they generate responses or content. This lack of transparency challenges journalistic integrity and legal accountability, he said.
Making matters worse, even as AI’s influence grows, Uganda still lacks a comprehensive national AI policy. Existing legislation – such as the Uganda Communications Act 2013, the Data Protection and Privacy Act, 2019, and the Computer Misuse Act 2022 – either predates AI’s emergence or offers vague protections.
“There are massive legal gaps,” said Anthony Masake, the Executive Director of Chapter Four, a Kampala-based civil liberties non-profit. “For instance, under the Computer Misuse Act, journalists can be prosecuted for publishing material deemed to ‘distort facts’ or ‘incite public insecurity’ — both of which are vague and easily abused.”
Uganda’s data protection laws also require journalists to work with complete, accurate, and consensually obtained data. But by design, AI tools scrape massive amounts of online content — often without such safeguards. “From where I sit as a lawyer, I can see the level of exposure journalists face when using AI,” said Masake. “The Uganda Communications Act, which sets minimum broadcasting standards, is especially problematic.”

He explained that vague provisions on live coverage, public morality, and promoting violence could expose journalists to legal risk. “AI allows you to generate quick headlines and real-time content. But it doesn’t assess whether that content meets standards of public morality or whether it might promote prejudice,” Masake said.
He added: “This is why, during previous elections, some media producers who aired live coverage of police brutality were targeted — even though they were simply showing what was happening.” Masake stressed that while AI enables journalists to report in near real-time, it also increases their legal vulnerability.
He warned that AI-generated content could easily distort facts if not verified, putting journalists at risk of prosecution under ambiguous legal provisions. “There’s a risk that the Uganda Communications Commission (UCC) could come after a journalist or media house for running a story they consider inaccurate — based on their own interpretation of facts,” Masake said.
Data privacy and consent
Masake also raised concerns about data privacy under Uganda’s Data Protection Act. “Every time you prompt AI, it scrapes the internet. Some of the data it retrieves may include personal information,” he explained. “You might use it in a story, unknowingly violating consent requirements. If a data subject complains to the Personal Data Protection Office (PDPO), you may not be able to respond adequately.”
He also stressed the importance of using complete and updated data. “If I divorced someone six months ago, and AI pulls outdated information showing we’re still married — and you publish that — it could be misleading and legally problematic.”
Similarly, AI might attribute to someone a title or role they no longer hold. Without verification, this can lead to inaccuracies and potential liability. Masake emphasized that data must be used in accordance with its intended purpose. “If information is collected during this meeting, it should be processed within the context of this meeting. Using it for another purpose creates exposure.”
Regarding the Computer Misuse Act, Masake warned that vague provisions including the definition of “hate speech” could be used to target journalists using AI-generated content. “The amendment defines hate speech to include text that ridicules or demeans someone. That’s incredibly broad,” he said.
Brenda Namata of Pollicy, a Kampala-based feminist collective of technologists, data scientists, creatives and academics working at the intersection of data, design and technology, urged editors and media leaders to take the initiative to deliberately train journalists and formulate newsroom policies.
“I cannot stress enough to the editors in the room that for us to have change as far as ethical use of AI in the newsroom is concerned, there is need for deliberate action in building the capacity of journalists, so that journalists are acting from a point of self-knowledge as opposed to expected knowledge within the newsroom,” she said. “It is very important that editors take the initiative to embed deliberate training of the newsroom on how to apply AI.”
AI is a double-edged sword
For Mercy Owilla Abiro, the Programmes and Partnerships Lead at AFIC, AI is a double-edged sword. “It can make or break newsrooms; we need to adapt how we make it useful in our news reporting, but also, we should not lose our agency to it. We should not be intellectually lazy but work with it to better our stories,” she told The Independent.
Dr. Ivan Lukanda, a journalism lecturer at Makerere University, observed that academic institutions are equally unprepared for the AI shift. “All of us are struggling with AI because AI is emerging in each and every sphere and we’ve had to learn on the job in order to stay relevant,” he said.
Dr. Lukanda said Makerere University is developing a policy on the use of AI. “AI is forcing universities to rethink assignments and assessments. We are developing new policies because students are increasingly relying on AI tools, often unknowingly crossing ethical lines,” he said.
“We know that we are dealing with a generation that is tempted to use AI all the time, so, we try to balance. Yes, some assignments have to be done using computers but we also do in-class assignments to ensure that we tap into the students’ talent. Those who are talented should be able to benefit from their talent, and we should also be able to nurture that talent.”
Adaptation, not resistance
The Uganda Communications Commission (UCC), which is part of the national AI task force, stressed that while regulation is essential, adaptation is just as important. “We’re dealing with a fast-evolving tool — not a threat to be eliminated,” said Meddy Kaggwa Ssebaggala, UCC’s Head of Multimedia Content. “What matters is how we choose to use it. The legal system must evolve, yes, but users, journalists included, must also learn and adapt.”
Kaggwa also pointed out that many laws are drafted without technical expertise. “Parliamentarians and drafters don’t always understand the implications of AI. That’s why journalists must be involved early to shape laws that protect, not punish, them.”
Gilbert Sendugwa of AFIC closed the symposium by urging participants to join advocacy efforts to reform laws, policies and regulations so that they facilitate, rather than constrain, journalism practice in the country.