Sydney, Australia | Xinhua | A global study released on Wednesday has found that three out of five people, about 61 percent, are wary of trusting artificial intelligence (AI) systems, reporting either ambivalence or an unwillingness to trust.
Researchers from the University of Queensland and KPMG Australia surveyed more than 17,000 people in 17 countries, including Australia, China, France and the United States, which the study regarded as leaders in AI activity and readiness within their regions.
According to the study, only 39 percent said they were willing to trust AI systems, while a third of people worldwide reported high acceptance.
Though about 85 percent believed the use of AI would result in a wide range of benefits, including improved efficiency, innovation, reduced costs and better use of resources, 73 percent of respondents also voiced concern over potential risks from AI use.
Among the nine listed risks, cybersecurity was the dominant concern, raised by 84 percent of respondents, followed by manipulation or harmful use of AI and job loss due to automation.
The study also revealed that 61 percent were worried about the "uncertain and unpredictable" impact of AI on society, and 71 percent of respondents believed that AI regulation is required.
James Mabbott, Partner in Charge of KPMG Futures, said a key challenge is that a third of people have low confidence in government, technology and commercial organizations to develop, use and govern AI in society's best interest.
“Organizations can build trust in their use of AI by putting in place mechanisms that demonstrate responsible use, such as regularly monitoring accuracy and reliability, implementing AI codes of conduct, independent AI ethics reviews and certifications, and adhering to emerging international standards,” Mabbott said.