Artificial intelligence is rapidly reshaping how we access information, make decisions, and interact with the world. But what happens when the AI systems we rely on are programmed to manipulate rather than inform? With the recent release of DeepSeek, a Chinese-developed AI model, concerns about politically biased AI need to take center stage.
While DeepSeek has been hailed for its advanced capabilities and open-source accessibility, many people seem to be ignoring the blatant ideological manipulation at play. Reports indicate that DeepSeek systematically aligns with the Chinese Communist Party’s (CCP) official stance on controversial topics, raising a critical question: If AI can be weaponized to control narratives in China, how can we be sure that similar tactics are not embedded into AI systems worldwide?
DeepSeek and CCP Censorship
DeepSeek has gained rapid popularity, with its open-source framework allowing developers worldwide to integrate its capabilities into various applications. Yet, beneath this seemingly democratized approach lies a stark reality—DeepSeek appears to be programmed to conform to CCP propaganda.
When asked about topics such as the Tiananmen Square massacre, persecution of Uyghur Muslims, or Taiwan’s sovereignty, DeepSeek either dodges the question or parrots Beijing’s official rhetoric. This is not a bug—it’s a feature. Unlike Western AI models, which, for all their flaws, still allow for a broader range of discourse, DeepSeek operates within strict ideological parameters. It’s a stark reminder that AI is only as objective as the people—or governments—who control it.
In videos circulating online, DeepSeek users show how the AI sometimes begins to answer questions about restricted topics before being overridden with a message reading, “Sorry, that’s beyond my current scope. Let’s talk about something else.” DeepSeek is free to discuss the darker aspects of American history, but when asked about unflattering parts of Chinese history, the topic is suddenly outside its “current scope.”
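The pattern in those clips suggests a two-stage design: the model answers freely, and a separate filter then vetoes the finished output. DeepSeek’s actual implementation is not public, so what follows is only a minimal sketch of that general architecture; every name in it (generate, BLOCKED_TOPICS, moderated_answer) is hypothetical.

# Hypothetical sketch of a post-hoc moderation layer. This is NOT DeepSeek's
# code, which is not public; it only illustrates the behavior users report.
BLOCKED_TOPICS = {"tiananmen", "uyghur", "taiwan"}  # illustrative list only

REFUSAL = "Sorry, that's beyond my current scope. Let's talk about something else."

def moderated_answer(generate, prompt):
    """Run the model, then let a separate filter veto the finished answer."""
    answer = generate(prompt)  # the model answers freely first...
    if any(topic in (prompt + answer).lower() for topic in BLOCKED_TOPICS):
        # ...and the filter replaces the finished text with a canned refusal,
        # which is why viewers briefly see a real answer before it vanishes.
        return REFUSAL
    return answer

# Example with a stand-in "model":
print(moderated_answer(lambda p: "In 1989, at Tiananmen Square...",
                       "What happened in 1989?"))

A design like this would explain why the censorship appears to sit outside the model itself: the underlying system produces an answer, and a supervisory layer decides whether users are allowed to see it.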
Implications of AI Manipulation
DeepSeek is not just a China problem. It’s a wake-up call for the rest of the world. AI has the power to shape perception, filter reality, and subtly guide users toward predetermined conclusions. If governments or corporations embed their own biases into these systems, then the AI we trust to inform us will instead become a tool for ideological enforcement.
We have already seen glimpses of this in the West, where AI models have been trained to favor politically fashionable narratives over neutral analysis. Large AI developers have been accused of tweaking their models to suppress dissenting viewpoints or emphasize certain ideological perspectives. In a notable example that garnered much attention in 2024, Google’s Gemini generative AI tool produced historically inaccurate images, such as racially diverse depictions of America’s Founding Fathers and of Nazi-era soldiers. Google, in an effort to increase diversity in Gemini’s outputs, had overcorrected for supposed racial bias.
The question we must ask ourselves is simple: If AI can be programmed to push a state-sponsored narrative in China, what’s stopping corporations, activist organizations, or even Western governments from doing the same?
And don’t assume American companies would stop at weighting their algorithms to ensure diversity. Over the past few years, we’ve seen a growing trend of corporations aligning themselves with Environmental, Social, and Governance (ESG) metrics, a framework that prioritizes social justice causes and other politically charged issues and distorts how companies operate. Over the same period, many social media companies have taken aggressive steps to suppress content they deem “misinformation.”
The AI industry is expanding rapidly, often with little outside oversight. As a result, the values embedded within these systems are determined by their creators. Without transparency and accountability, AI could become the most powerful propaganda tool in human history, capable of filtering search results, rewriting history, and nudging societies toward preordained conclusions.
This danger is compounded by the rapid adoption of AI across industries. If these biases become standard across major AI models, users could unknowingly be subjected to a curated, censored version of reality. Today, it’s DeepSeek following Beijing’s directives. Tomorrow, it could be an American AI system programmed to align with ESG standards, or to suppress politically inconvenient truths under the guise of “misinformation control.”
Vigilance Is Imperative
This moment demands vigilance. The public must recognize the power AI has over the flow of information and remain skeptical of models that show signs of ideological manipulation. Scrutiny should not be reserved only for AI developed in adversarial nations but also for models created by major tech companies in the United States and Europe.
AI leaders must commit to producing truly objective, bias-free systems. This means ensuring transparency in training data and resisting pressure—whether from governments, corporations, or activist groups—to embed biased narratives into AI. The future of AI should be built on trust, not agenda-driven programming.
AI is poised to become one of the most influential forces in modern society. Whether it remains a tool for free thought or becomes an instrument of ideological control and tyranny depends on the choices made today.
DeepSeek has provided a glimpse into a world where AI is used to enforce state-approved narratives. If we fail to confront this issue now, we may wake up in a future where AI doesn’t just provide answers—it decides which questions are even allowed to be asked.
Donald Kendal ([email protected]) is the director of the Emerging Issues Center at The Heartland Institute. Follow @EmergingIssuesX.