
Africa snubbed at global AI safety summit

U.K. Prime Minister Rishi Sunak at the AI Safety Summit (PHOTO/PORTLAND PRESS HERALD)

Experts cite asymmetrical advancement in the use and purpose of AI technologies between developed and developing countries

Kampala, Uganda | IAN KATUSIIME | War and the combat capabilities of Artificial Intelligence (AI) technology were at the back of some people’s minds at the just concluded global AI safety summit in the UK, held from Nov. 1-2.

Twenty-seven countries, including China and the U.S., signed an agreement on ensuring the safe use of AI, known as the “Bletchley Declaration”, aimed at combating the risks of the technology. Bletchley Park was the base of code breakers during the Second World War.

The Bletchley agreement will ensure that governments that signed it can test the AI models of nine leading tech companies before they are released.

The American magazine Politico reported that the nine companies are Amazon Web Services, Anthropic, Google, Google DeepMind, Inflection AI, Meta, Microsoft, Mistral AI and OpenAI, and that they had agreed to “deepen” the access already given to the UK’s Frontier AI Taskforce.

“Until now the only people testing the safety of new AI models have been the very companies developing it. That must change,” declared Rishi Sunak, the U.K. Prime Minister who convened the summit.

On Oct. 30, two days before the AI summit, U.S. President Joe Biden issued an Executive Order on the “Safe, Secure and Trustworthy Development and Use of Artificial Intelligence”. Biden’s order detailed what many experts have long warned about AI, such as its potential for racial discrimination, disinformation and fraud, and the risks it poses to national security.

Biden emphasised safeguards in the use of the new technology: “Meeting this goal requires robust, reliable, repeatable, and standardised evaluations of AI systems, as well as policies, institutions, and, as appropriate, other mechanisms to test, understand, and mitigate risks from these systems before they are put to use.”

The summit was a convening point for world leaders, tech titans, AI firms, non-profits and business organisations from the world over, as governments look for ways of regulating the rapidly evolving technology and mitigating its risks.

U.S. Vice President Kamala Harris, China’s vice minister of science and technology Wu Zhaohui, UN Secretary General António Guterres, and other world leaders attended. Billionaire Elon Musk, who owns the social media site X and has been vocal about the regulation of AI, attended and spoke at the event. Sunak defended the invitation of China after grumbles from some voices in the UK, including his predecessor Liz Truss, arguing that China is undoubtedly one of the world’s leaders in AI.

African leaders and tech firms from the continent were not invited to the event. Part of the reason for the snub could be the asymmetrical advancement in the use and purpose of AI technologies between developed and developing countries.

Speaking about the Ugandan experience, Michael Niyitegeka, a technologist and Executive Director of Refactory, a tech skilling academy in Uganda, said AI use in the country is still at an early stage.

“Overall the use of AI in Uganda is still in the nascent stages with a lot of work largely in the research stage or pilot phase,” Niyitegeka said in an email response to The Independent.

He said, however, “Significant work has been in the diagnostics space”.

He said the AI lab at the College of Computing and Information Sciences at Makerere University has been at the forefront of developing AI tools in Uganda.

AI projects in Uganda

While the focus at the summit was on the combat capabilities of AI technology, most of the projects at the AI lab at Makerere are about socio-economic uses.

Some of the projects the AI lab is working on include: using AI and data systems for targeted surveillance and management of Covid-19 and future pandemics affecting Uganda; using machine learning to predict deforestation; using machine learning to provide advisory services to smallholder farmers in Uganda; and building text- and speech-based datasets for low-resourced languages in East Africa.

Other projects at the AI lab include building datasets for AI-based diagnosis of malaria. The project will “provide accessible, large and quality geo-tagged datasets of microscopy thick and thin blood smear images from Uganda and Ghana for improved field-based diagnosis of malaria.”


Niyitegeka says the major gap is in access to funds to scale up the AI tools that have passed the prototype stage.

“Otherwise these remain as lab products and as a country we never get to appreciate the real value of these tools. It is important to have special purpose funds to support these initiatives to scale, it is only then that application and potential business value is realized when we deploy in other markets,” he adds.

Most projects at the AI lab at Makerere University are run by Ugandans but are funded by heavy hitters in the world of AI like Google and Facebook. The Ugandan Ministry of ICT has a partnership with Sunbird AI, an entity that champions the application of AI in solving societal challenges.

Niyitegeka says the lack of talent to support proper integration of AI is a major challenge. This is especially so for corporate businesses, where most enterprise solutions have AI embedded but need to be aligned to deliver value for the business.

As a result, Refactory, the pioneer training academy Niyitegeka heads, which provides skilling for the global tech industry, is implementing a pilot Machine Learning/Artificial Intelligence training programme. It is supported by the German Federal Ministry for Economic Cooperation and Development (BMZ) and the Deutsche Gesellschaft für Internationale Zusammenarbeit (GIZ), which implements the project “FAIR Forward – Artificial Intelligence for All”, an initiative that strives for a more open, inclusive and sustainable approach to AI at the international level.

“The first cohort started in October 2023 and has over 100 participants who will graduate in April 2024. We envisage to have more cohorts in the coming years. This should contribute to the talent pool of ML/AI experts in the market,” he told The Independent.

AI in warfare

The summit was timely, as the world is already witnessing the ugly side of AI. The Russia-Ukraine war has been cited as the first major conflict where both sides have used AI-powered weapons. Ukraine has extensively used the Turkish-made TB2 drones to devastating effect, while Russia has employed the Iranian-made Shahed-136 drones. Combined casualties in the war have reportedly surpassed 500,000.

The conflict in Gaza, where Israeli unmanned aircraft continuously bombard residential areas and hospitals, killing innocent civilians including children, has demonstrated the danger of unregulated technology. The bombing of Gaza is now being talked about in genocidal terms. Israel is a tech haven and its reputation in surveillance technology is well known.

With the rise of killer robots, fears about automated weapons are even more pronounced. In a special report on killer robots, Reuters sounded a warning.

“The capacity of AI systems to analyze surveillance imagery, medical records, social media behavior and even online shopping habits will allow for what technologists call ‘micro-targeting’ – attacks with drones or precision weapons on key combatants or commanders, even if they are nowhere near the front lines,” it said.

The report added: “AI could also be used to target non-combatants. Scientists have warned that swarms of small, lethal drones could target big groups of people, such as the entire population of military-aged males from a certain town, region or ethnic group.”

However, analysts have also said AI is improving the defence capabilities of weaker nations facing stronger adversaries: Ukraine against Russia, and Taiwan, which faces an ever-present threat of invasion from China. Taiwan has drawn inspiration from Ukraine’s use of drones, which have been heralded as the ultimate asymmetric weapon, and is building its own asymmetric capabilities.

Defence analysts have also talked about the advantage created by AI weapons: a platoon with automated weapons will have the combat power of a battalion, and a battalion with similar weapons will have the combat power of a brigade. Reduced casualties have also been cited as a factor. A drone carrying out surveillance in an enclave 25km from its base will relay images in real time even if it gets shot down; if a human scout on the same mission is killed by enemy forces, both the life and the intelligence are lost.

The rise of AI in warfare has also led to the emergence of companies dealing in autonomous weapons, like Anduril, a U.S. company formed to “radically transform the defense capabilities of the United States and its allies by fusing artificial intelligence with the latest hardware developments.” Anduril is manufacturing the Ghost Shark, an AI-powered submarine, for the Australian Navy, according to the Reuters report.

Other companies making waves in the field include Shield AI, a defense technology company whose stated mission is to protect service members and civilians with intelligent systems.

“We’re building the world’s best AI pilot to ensure air superiority and deter conflict because we believe the greatest victory requires no war,” said Brandon Tseng, Shield AI’s president, co-founder and a former Navy SEAL, on Oct. 31.
