Is AI about to get a whole lot less fair? The Trump administration's recent moves could dramatically reshape how AI is regulated in the workplace, potentially rolling back protections against algorithmic discrimination. Much of the outcome hinges on how 'fair' is defined in the age of artificial intelligence.
Back in April 2025, my colleague Russ Anderson and I published an article examining the pitfalls of using AI in human resources (HR), including the risks associated with AI-driven hiring processes and employee performance evaluations. We specifically addressed the growing body of laws designed to shield employees and job applicants from the potential negative impacts of AI in the workplace. A primary focus of these laws is preventing algorithmic discrimination: instances where AI systems unfairly disadvantage certain groups of people. Some laws, like the Colorado Artificial Intelligence Act (CAIA), even require companies to exercise 'reasonable care' to prevent such discrimination. Since then, the landscape has shifted significantly with the introduction of President Trump's AI Action Plan and related Executive Orders.
On July 23, 2025, the Trump administration unveiled “America’s AI Action Plan,” accompanied by three executive orders. These orders aim to advance three key pillars: accelerating AI innovation, building a robust American AI infrastructure, and establishing American leadership in international AI diplomacy and security. On its face, that agenda sounds promising. The details, however, matter a great deal for employers.
Executive Order 14179, issued earlier in January 2025 and referenced in the Action Plan's introduction, specifically calls for the revocation of existing AI policies and directives that are perceived as hindering American AI innovation. The rationale? To unleash the full potential of AI by removing unnecessary barriers to development.
While a proposed 10-year moratorium on state and local AI regulations (floated in an earlier draft of the “Big Beautiful Bill Act”) didn't make it into the final version, the July 23, 2025, Action Plan still targets state-level AI regulations in several notable ways.
First, the Action Plan states that while the federal government shouldn't unduly interfere with states' rights to enact laws, it also asserts that federal AI-related funding shouldn't be directed towards states with AI regulations deemed “burdensome to innovation.” Think of it as a carrot-and-stick approach: states are free to regulate, but they might pay a price in terms of federal support.
Furthermore, the Action Plan directs the Office of Management and Budget (OMB) to collaborate with federal agencies that oversee AI-related discretionary funding programs. The goal? To ensure these agencies consider a state's AI regulatory climate when making funding decisions. The Plan explicitly states that funding should be limited if a state's regulations might impede the effectiveness of the funded program or award. The Federal Communications Commission (FCC) is also tasked with evaluating whether state AI regulations interfere with its ability to fulfill its responsibilities under the Communications Act of 1934. Given the FCC's broad jurisdiction over communications infrastructure, that review could carry significant implications across the tech sector.
The Action Plan also emphasizes the development of AI systems that are "free from ideological bias or engineered social agendas." It goes on to recommend revising the framework developed by the National Institute of Standards and Technology (NIST) to eliminate references to Diversity, Equity, and Inclusion (DEI). This is a significant departure from the current emphasis on fairness and inclusivity in AI development.
Let's break down why this is so important. As we noted in our April 2025 article, Colorado's CAIA creates a “presumption of reasonable care” for employers using AI in hiring if they can demonstrate a risk management program aligned with recognized standards in the NIST framework. So, if the NIST framework is revised to remove all references to DEI, what happens to that presumption? Would companies no longer be required to prioritize workforce diversity when managing AI risks?
For example, the NIST framework currently states that a key function of AI risk management is ensuring that “[w]orkforce diversity, equity, inclusion, and accessibility processes are prioritized in the mapping, measuring, and managing of AI risks throughout the lifecycle.” If DEI is removed, the implications could be profound.
There is a counterpoint, however. States like Colorado could respond by rejecting the revised NIST framework and instead requiring companies to adopt frameworks that do mandate consideration of workforce diversity as part of their risk management procedures. This would allow them to maintain the “presumption of reasonable care” under state law. In that scenario, federal agencies might then limit or eliminate earmarked federal funding to such states, as suggested in the Action Plan, setting up a potential showdown between state and federal priorities.
As of today, it's still too early to fully assess the impact of the July 23, 2025, Action Plan and the corresponding executive orders on AI regulation at both the state and federal levels. However, employers using AI to make personnel decisions need to stay informed about developments in AI regulation, especially in states where they operate or have employees. This vigilance is crucial to ensure compliance with all applicable laws.
What do you think? Should the federal government prioritize innovation over state-level efforts to ensure fairness and equity in AI? Is it even possible to create AI that's truly free from bias? Share your thoughts in the comments below!