The topics range from fraudulent food advertisements and charity scams to the Ukraine war, global trade conflicts, and South Korean celebrity scandals.
This chaos reminds me of a controversial media policy change made less than two months ago.
Meta, the parent company of Facebook, Instagram and Threads, shut down its fact-checking program. Instead, it introduced "community notes," a system where users can add context or warnings under posts but without needing proof or expertise.
Meta is following the lead of two other tech giants: X (formerly Twitter) in 2023 and YouTube in 2024.
By abandoning direct content moderation, these companies have stopped filtering misinformation at its source. Some commentators now call social media the "Wild West of news."
Social media apps on a phone screen. Illustration photo by Pexels/Magnus Mueller
There are several reasons behind this shift by social media companies.
First, misinformation spreads too fast for fact-checking organizations to keep up. Professional fact-checkers, who label false claims and provide explanations, work far more slowly than fake news appears.
If artificial intelligence is weaponized to mass-produce and distribute misinformation, the challenge for human fact-checkers becomes even greater.
Second, tech executives and their supporters often question whether these fact-checking groups are truly unbiased. They argue that complete impartiality is impossible, especially on political issues. To them, these groups restrict free speech by policing discussions on social media.
Third, misinformation has had serious consequences, especially during Covid-19 and U.S. presidential elections. No company wants to take the blame when things go wrong.
So social media platforms now claim neutrality, leaving users to share information freely while governments decide on regulation.
But media experts push back against these claims.
One, fact-checkers may not stop all misinformation, but they can slow its spread before it causes harm. Their credibility reassures the public and encourages people to verify information for themselves.
Two, studies show fact-checkers' accuracy is about the same as that of politically unaffiliated individuals. While absolute neutrality may be unrealistic, by upholding professional ethics, these checkers help build public awareness and improve media literacy.
Three, tech companies make billions from online engagement but avoid accountability. They cut funding for independent fact-checkers, neglect AI-based misinformation detection and refuse to work with local governments to reduce real-time harm caused by false information on their platforms.
As of early 2025, Vietnam faces growing challenges in handling misinformation. Without independent fact-checkers or direct oversight from social media platforms, false information becomes harder to manage.
Not every case is clear-cut or urgent enough for government intervention.
In the short and medium terms, the only solution is improving public news literacy through education. Without it, misinformation will not just fuel pointless online fights but will also erode trust in reliable news sources, weaken journalism and increase risks of social instability in the face of crises.
*Lang Minh is currently a senior education consultant for the MindX startup education ecosystem.