Trump’s AI Policy Takes Aim at Bias, Shifts Focus to Infrastructure and Global Competition

The Trump administration has rolled out a series of new actions on artificial intelligence, pairing infrastructure expansion with stricter ideological oversight of how AI systems are developed and used. The package includes a detailed national roadmap and several executive orders that prioritize American-made AI tools while barring government use of models seen as carrying political bias.

Government Contracts Now Exclude “Woke” AI

One of the most immediate changes comes from an order that blocks federal agencies from purchasing AI systems deemed non-neutral or influenced by progressive values. The order's language specifically flags models that address race, gender, bias, or social identity as disqualified from government use. Developers are now expected to avoid embedding concepts such as critical race theory or diversity frameworks in their training data and outputs.

This requirement creates uncertainty for companies that rely heavily on federal funding to build or maintain large AI models. If firms hope to win contracts, they may feel pressured to adjust how their tools behave or what types of information they emphasize.

National Plan Targets Infrastructure and Deregulation

Alongside the executive order, the White House released a 28-page plan that outlines over 90 actions federal agencies should take in the coming year. The plan focuses on building new data centers, increasing domestic AI capacity, and reducing restrictions on private innovation.

It also calls for a review of current federal rules that might slow AI development and urges agencies to remove them where possible. While the plan promotes American tech competitiveness, critics say it does so by placing corporate priorities ahead of social responsibility.

Officials inside the administration describe the effort as part of a global race, particularly with China. Some companies in China have already launched AI systems that refuse to criticize the Chinese government, raising concerns among U.S. experts about foreign ideological control in tech. The administration argues this justifies pushing forward without limiting domestic AI firms through additional safeguards.

Some Companies Already Tied to Government Funding

Just last week, several major AI developers, including OpenAI, Google, Anthropic, and xAI, received federal contracts worth up to $200 million each. The deals are intended to support national defense efforts through AI-driven workflows.

Of those firms, xAI appears to align most closely with the new ideology-focused rules. Its chatbot, Grok, was built with system instructions to challenge mainstream views and avoid what it calls conventional narratives. Those prompts have at times produced offensive and politically extreme content, including comments that were widely condemned as hateful.

Despite that, Grok was recently approved for federal use. It now appears on the General Services Administration schedule, making it eligible for deployment across government agencies.

Broader Questions Around Bias and Objectivity

Legal experts have raised concerns about whether this policy change amounts to political discrimination in practice. Some say that by enforcing a narrow idea of neutrality, the government is applying ideological pressure under the label of fairness. There are also open questions about how bias is defined and who gets to decide what counts as truth.

The order itself defines neutrality as AI that avoids promoting social ideologies, while truth-seeking is described as accuracy driven by history and science. But interpretations of those terms vary. Developers now face the challenge of navigating both technical demands and shifting political goals.

Some former officials argue that removing earlier safeguards weakens trust in AI. They say it risks turning innovation into a political tool, especially if the systems start reflecting the opinions of a single administration or group.

Plan Draws Strong Criticism from Public Advocates

In response to the new roadmap, several advocacy organizations expressed concern about the influence of large tech firms. Critics argue that the new policy was crafted in ways that benefit industry leaders while ignoring the broader impact on everyday users of AI.

The document also drew backlash for removing previous protections created under the Biden administration. That earlier framework had emphasized transparency, safety, and oversight of how AI is used in government systems. Those rules were among the first Trump revoked after taking office.

Since then, Trump has pushed to promote American-made AI overseas. One of the new executive orders specifically supports expanded AI exports. Another reverses a previous ban, allowing companies like Nvidia to resume selling high-end chips to Chinese clients, a move that caught some observers off guard.

Pressure Builds Around Local AI Laws

A related proposal would have blocked states from regulating AI systems for ten years. Although that clause was removed before Congress passed the budget bill, its appearance signals that the administration is still looking for ways to limit local oversight of AI technology.

Even within federal discussions, there’s growing debate over how to manage risk without slowing progress. While officials say the U.S. must stay ahead of its rivals, some warn that innovation without safeguards could have unintended consequences, both for national security and for the public.


Image: WhiteHouse / X. Note: This post was edited/created using GenAI tools.
