Artificial intelligence, or AI, has quickly captured the world's fascination. AI technology has helped drive tremendous innovations in science, medicine, research and development, engineering and the arts, among many other fields. It has the potential to transform economies and societies for generations to come. But the downside to this groundbreaking innovation is a growing distrust and skepticism, fueled by a lack of regulation and the potential abuses that AI may bring.
According to a global survey by consulting firm Edelman, just 35% of Americans say they trust AI-related companies, down from 43% in 2023 and 50% in 2019. Americans are also much more skeptical of AI than the rest of the world: among respondents from the 28 nations included in the survey, Edelman notes the global average level of trust in AI companies is 54%.
Understandably, the rapid rise in the use of AI has created unease among the American public. There have been a number of high-profile incidents in which AI has been used to alter photographs, create biased outcomes or spread disinformation.
Another concern is that even if AI is free from intentional bias or manipulation, its output can be incomplete or simply inaccurate. In 2023, a New York attorney used the AI tool ChatGPT to help him research case law for his client’s personal injury claim. Unfortunately, the AI-generated legal brief the attorney submitted to the court referenced six court decisions that simply didn’t exist. The brief also included false names and docket numbers, along with made-up citations and quotes.
At the heart of this distrust is a lack of confidence in effective regulation and control. According to survey results from MITRE-Harris, just 39% of Americans believe AI is safe and secure, down nine points from their November 2023 survey. Moreover, 82% say they are either somewhat or very concerned about AI being used for malicious intent. Top concerns among respondents included AI being used for cyberattacks (80%), identity theft (78%) and the sale of personal data (76%), as well as a lack of accountability for those using AI (76%) and deceptive political ads (70%).
By overwhelming consensus, Americans want accountability. In fact, 85% of Americans believe AI technologies should be regulated to ensure adequate consumer protections. Douglas Robbins, Vice President of Engineering & Prototyping at MITRE, stated, “The deep concerns that U.S. adults are expressing about AI are understandable. While the public has started to benefit from new AI capabilities such as ChatGPT, we’ve all watched as chatbots have spread political disinformation and shared dangerous medical advice. And we’ve seen the government announce an investigation into a leading company’s data collection practices.”
For Americans to fully embrace AI, they first need to trust it. But according to the latest surveys, that trust just isn’t there yet.
Mark M. Grywacheski, Investment Advisor
Quad Cities Investment Group is a Registered Investment Adviser.
This material is solely for informational purposes. Advisory services are only offered to clients or prospective clients where Quad Cities Investment Group and its representatives are properly licensed or exempt from licensure. Past performance is no guarantee of future returns. Investing involves risk and possible loss of principal capital. No advice may be rendered by Quad Cities Investment Group unless a client service agreement is in place.