When machines take over: We need to understand AI better to control it

June 16, 2023 | 03:20 pm PT
Nguyen Xuan Phong, AI expert
During a career orientation seminar for students in Singapore earlier this year, I received many questions about the applications and impact of AI in various industries.

In addition to excitement about the technology's potential, many young people are also concerned about its negative impact on their lives and work in the future.

Ten years ago it would have been difficult to imagine a tool that surpasses human capabilities in once-unimaginable ways: drawing a picture or summarizing a 500-page book in seconds, or answering millions of customer care calls every month.

AI is undeniably a powerful tool that reduces costs and increases labor productivity. It is also easy to use, requiring no significant training.

Google already uses AI to predict floods and other extreme weather events driven by climate change. Its tool has sent 115 million flood warning notifications to 23 million people via Google Search and Maps, saving many lives.

The value of AI will be endless, depending only on our ability to exploit it.

But people also recognized the dangers of AI almost immediately.

AI can make life better, but it can also push humanity in a negative direction, quickly and on a massive scale.

Creating fake photos used to require photo editing skills. Today, AI creates fake photos from nothing more than a descriptive prompt. Fake photos can be used to distort facts, defraud people and, more dangerously, manipulate them for political or economic ends.

Recently, a fake photo of the Pentagon on fire briefly sent U.S. stock markets down. The market only recovered after the photo was confirmed to be fake.

In the military domain, AI can be integrated into drones, posing a potential danger since one person could control thousands of autonomous weapons at the same time.

Professor Yoshua Bengio, one of the world's leading AI scientists, worries that AI could be used to develop new chemical weapons. He has advised against giving the military AI powers.

"It might be military, it might be terrorists, it might be somebody very angry, psychotic," he told the U.K.’s state-owned BBC network.

"If it's easy to program these AI systems to ask them to do something very bad, this could be very dangerous. If they're smarter than us, then it's hard for us to stop these systems or to prevent damage."

In addition to serving as a tool that increases automation and amplifies productivity, AI is also capable of learning on its own and getting smarter. Recent advances suggest that super-intelligent AI is closer than we thought just a year ago.

AlphaFold, an AI system developed by DeepMind, can predict the 3D structures of more than 200 million proteins from their one-dimensional amino acid sequences alone.

This is considered a big step forward in human understanding of biology. The technology will help humanity create new drugs, but it could also be used to design poisons capable of destroying life on a large scale.

If AI can make its own decisions and becomes smarter than most humans, the risks it poses will be unpredictable.

Danger arises when AI's goals do not align with human goals. If super-intelligent AIs are created too quickly, before we are ready to align their goals with ours, we will lose control.

Another problem is that organizations and individuals pursue different, sometimes conflicting, goals when using AI.

This problem can be addressed by establishing clear regulations that control the use of AI: rules that define what AI is allowed to do and ensure transparency and accountability in its use.

For example, governments may require disclosure of whether content was written by AI or by a person, and whether an image was generated by AI or created by a human.

By following responsible AI principles, organizations and individuals can harness and maximize the positive impacts of AI while minimizing potential risks.

Caution must of course go hand in hand with ongoing research into AI. The better we understand AI, the better we can control its risks.

*Nguyen Xuan Phong is a data scientist. He obtained a doctorate from the University of Tokyo with a thesis on applied AI while working for Hitachi. He returned to Vietnam in 2018 and, two years later, connected MILA - Quebec AI Institute with FPT Corporation to set up the FPT.AI research center in Quy Nhon in central Vietnam.

The opinions expressed here are personal and do not necessarily match VnExpress's viewpoints.
 
 