The recent discourse surrounding the Malaysian Communications and Multimedia Commission (MCMC) and its scrutiny of AI tools like Grok reveals a concerning disconnect in how we approach technology and governance.
The issue at hand is not new: generative AI can be used to create inappropriate images, such as depicting a random person in a bikini.
Photo-editing software has been capable of this for decades.
The difference is merely the barrier to entry.
Yet, the reaction from our regulators suggests a fundamental misunderstanding of the problem.
Blaming the software for the content it generates is intellectual laziness.
It is a convenient way to sidestep the harder, more necessary conversation about individual accountability.
The Logic of the Knife
Consider a simple analogy: If an individual uses a knife to commit a violent crime, the justice system targets the individual.
We charge the perpetrator.
We do not ban knives, nor do we mandate that knife manufacturers make their blades blunt to prevent misuse.
We understand implicitly that the tool is neutral; the intent lies with the user.
Yet, when it comes to technology and digital speech, this logic seems to evaporate. Instead of enforcing responsibility, authorities appear keen to take the path of least resistance: censor the tool.
This approach betrays a lack of nuance in our leadership.
It suggests that the authorities do not trust the citizenry to act responsibly, nor do they trust their own ability to enforce laws against harassment, defamation, or obscenity without resorting to a blanket ban.
A Culture of Permission, Not Judgment
The danger here extends beyond the inconvenience of losing access to a specific AI tool.
The real threat is the precedent it sets for Malaysian society.
Instead of building a culture of accountability, in which individuals recognise they are responsible for their digital footprint, we are conditioning the public to believe that censorship is a normal, even virtuous, tool of governance.
This is psychologically and civically dangerous.
It trains citizens to look to authority for permission rather than exercising their own judgment.
It suggests that if a tool is available, it is the government’s job to baby-proof it, rather than the user’s job to wield it wisely.
The Slippery Slope of “Protection”
We must be wary of the justifications used for these restrictions.
Today, the banner is “protecting people from harmful content.”
This is a sentiment no one disagrees with in principle.
However, when the mechanism for protection is broad censorship, the definition of “harm” becomes fluid.
Today, it is deepfakes. Tomorrow, will it be “protecting social harmony”? The day after, “protecting national stability”?
This is how freedom erodes.
It does not vanish overnight in a coup; it is chipped away through constant, small restrictions justified by moral panic.
If we establish that the solution to potential misuse is to ban the technology entirely, we hand the government a blank check to restrict any platform that challenges the status quo or makes them uncomfortable.
The Path Forward
The problem is not the software.
The problem is people misusing it, and a governance structure that is either too weak or too unimaginative to enforce accountability, choosing instead to hit the “block” button.
We need robust laws that punish the act of creating non-consensual deepfakes or harassment.
We need to prosecute the individuals who weaponise these tools.
But we must not demonise the technology itself.
Political correctness and “safety” provide the excuse.
Fear provides the compliance.
But if we accept censorship in place of responsibility, we are not just regulating software; we are regulating thought.
It is time for the authorities to stop policing the tools and start policing the actions.
Jane Teo Jia Yun is the Founder of Maxima Clarity, an AI SaaS infrastructure platform dedicated to advancing long-term technological progress. An active leader in the AI industry since 2018, Jane champions a philosophy of responsible, human-centric, and profit-first artificial intelligence. A National University of Singapore (NUS) graduate, Teo is a vocal advocate for free-market innovation and actively speaks out against state-centric control models. She argues that centralised government control risks converting AI into an instrument of surveillance, economic suppression, and innovation stagnation. Beyond her work in technology, Teo is a dedicated humanitarian committed to preserving human dignity. She is actively involved in the fight against human trafficking, sex trafficking, and forced organ harvesting, working to expose and eradicate these abuses globally.