A long-simmering legal challenge against Workday has reached a critical turning point, one that could reshape how enterprises deploy AI in hiring. A federal court has authorized notices to be sent to potential plaintiffs in a landmark case alleging that Workday’s AI-driven hiring tools discriminate against certain job seekers.
The case, Mobley v. Workday, Inc., is unfolding in the U.S. District Court for the Northern District of California and stems from a 2023 lawsuit claiming that Workday’s algorithms screened out qualified applicants based on protected characteristics, such as age and race. The court has now allowed the central age-discrimination claim to proceed as a collective action, enabling other affected individuals to opt in.
The ruling is a shot across the bow for the HR tech industry. With Workday software powering finance and HR operations for more than 65% of the Fortune 500 – including 70% of the top 50 companies – and serving customers in 175 countries, the implications could reach far beyond the software provider.
Inside the Discrimination Case
At the heart of the lawsuit is Derek Mobley, a Black man over 40, who says he applied for more than a hundred roles over several years at employers using Workday’s recruitment tools but was consistently rejected, often within minutes or overnight. That speed, he argues, suggests it was Workday’s automation, not the employers themselves, that rejected his applications.
His complaint alleges that Workday’s technology filtered him out unfairly, in violation of Title VII of the Civil Rights Act, the Americans with Disabilities Act, and the Age Discrimination in Employment Act (ADEA). In early 2025, he sought court authorization to pursue his age-discrimination claim as a collective action.
Workday moved to dismiss the complaint, asserting that it’s the employers, not the software vendor, who make hiring decisions. The company stated that its platform merely assists clients by organizing and ranking applicants. But Judge Rita F. Lin ruled that the case could proceed, citing the possibility that Workday’s algorithms materially influence outcomes in ways that warrant legal scrutiny.
That ruling opens new ground. Historically, employment-discrimination laws have targeted employers directly, not their vendors. The question now is whether a software provider can be held liable when its algorithms drive key stages of candidate evaluation. The potential class size—possibly in the tens of millions—has drawn comparisons to the largest discrimination cases ever filed in the U.S.
Workday has publicly denied all allegations. Its chief responsible AI officer, Kelly Trindel, emphasized that “Workday AI does not make hiring decisions and is not designed to automatically reject candidates,” adding that customers maintain human oversight throughout recruitment. But as the case moves forward, HR leaders are questioning what it means for their own use of Workday’s technology and what steps they should take while the legal dust settles.
Understanding Workday’s AI and What HR Teams Should Do
To understand the debate, it helps to look at what Workday’s AI tools actually do.
The company’s platform has long used automation to help employers screen large volumes of applications, identify qualified candidates, and generate consistent job requisitions. These features promise to streamline hiring and reduce bias.
AI has been taking on a bigger role across Workday’s platform as the technology matures. In 2026, the company expanded its ecosystem to include the Paradox Conversational Applicant Tracking System, an AI-driven tool designed to accelerate frontline hiring. Workday also announced a forthcoming generative assistant, Frontline Agent, intended to help recruiters and HR professionals manage day-to-day candidate interactions more efficiently.
In theory, these tools free up HR teams to focus on human judgment rather than administrative tasks. However, automation in hiring brings heightened legal risk.
The U.S. Equal Employment Opportunity Commission (EEOC) has already warned that employers remain accountable for discriminatory outcomes produced by the AI tools they use. The Mobley case, by potentially extending that liability to third-party vendors, makes the picture even more legally complex.
For HR professionals using Workday or similar systems, caution is now essential. A successful suit against Workday could expose companies using the platform to litigation as well.
HR professionals using Workday should take a cautious, defensible approach:

- Pause or limit the use of automated screening tools in high-risk areas, particularly when AI filters candidates before human review.
- Conduct regular bias audits (a minimal sketch of one such audit follows this list).
- Work closely with vendors to verify compliance with equal-opportunity laws.
- Document clear human oversight for every hiring decision to ensure accountability.
- Seek legal counsel before implementing or expanding algorithmic screening, especially in jurisdictions introducing new AI hiring regulations.
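One concrete starting point for a bias audit is the EEOC’s four-fifths (80%) guideline: compare each group’s selection rate to the most-selected group’s rate and flag large gaps for review. The sketch below is a minimal illustration under assumed inputs; the applicant records and group labels are hypothetical, and it shows the arithmetic only, not Workday’s systems or a substitute for a formal validation study.

```python
from collections import defaultdict

# Hypothetical applicant records exported from an ATS:
# (group label, whether the candidate advanced past the automated screen).
applicants = [
    ("40_and_over", True), ("40_and_over", False), ("40_and_over", False),
    ("40_and_over", False), ("under_40", True), ("under_40", True),
    ("under_40", False), ("under_40", True),
]

def selection_rates(records):
    """Share of each group that advanced past the automated screen."""
    totals, passed = defaultdict(int), defaultdict(int)
    for group, advanced in records:
        totals[group] += 1
        passed[group] += advanced
    return {g: passed[g] / totals[g] for g in totals}

def impact_ratios(rates):
    """Each group's selection rate relative to the highest-rate group."""
    best = max(rates.values())
    return {g: r / best for g, r in rates.items()}

rates = selection_rates(applicants)
for group, ratio in impact_ratios(rates).items():
    # Under the four-fifths guideline, a ratio below 0.8 is a common
    # red flag for disparate impact; it is a prompt to investigate,
    # not a legal conclusion.
    flag = "REVIEW" if ratio < 0.8 else "ok"
    print(f"{group}: rate={rates[group]:.2f}, impact ratio={ratio:.2f} [{flag}]")
```

A flagged ratio does not establish discrimination on its own, but documenting these checks on a regular cadence is exactly the kind of oversight evidence that supports a defensible compliance posture.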
In short, HR departments should not assume vendor compliance equals organizational compliance.
What Comes Next for Workday and AI Hiring
The Mobley v. Workday case highlights the growing tension between innovation and accountability in enterprise HR software. As regulators, courts, and public opinion converge on questions of algorithmic fairness, technology providers face mounting pressure to prove that their systems do not produce discriminatory outcomes.
For Workday, the case is more than a reputational issue; it’s a legal test that could determine whether AI vendors can be held directly liable under anti-discrimination law. If that threshold is crossed, it would send shockwaves through the broader HR tech sector.
Other vendors that embed AI-driven scoring or matching systems may soon find themselves under similar scrutiny. Meanwhile, HR leaders should anticipate a reevaluation of vendor relationships and risk management policies. Many may temporarily disable automated candidate-screening tools while awaiting legal clarity. Others may look to deploy explainable AI frameworks that allow hiring teams to understand and justify algorithmic recommendations.
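To make “explainable” concrete: one common, generic technique is permutation importance, which measures how much a model’s accuracy degrades when each input feature is shuffled. The sketch below applies it to a synthetic model with scikit-learn; the features, data, and model are hypothetical stand-ins for illustration and bear no relation to Workday’s actual algorithms.

```python
# Hedged sketch: checking which features drive a screening model's
# recommendations via permutation importance. All data and features
# here are synthetic stand-ins, not any vendor's real system.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
feature_names = ["years_experience", "skills_match", "employment_gap", "age"]
X = rng.normal(size=(500, len(feature_names)))
# Synthetic labels driven mostly by the skills-match feature.
y = (X[:, 1] + 0.3 * rng.normal(size=500) > 0).astype(int)

model = RandomForestClassifier(random_state=0).fit(X, y)

# How much does shuffling each feature hurt the model's accuracy?
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
ranked = sorted(zip(feature_names, result.importances_mean),
                key=lambda pair: -pair[1])
for name, score in ranked:
    print(f"{name}: {score:.3f}")
```

In this synthetic setup, skills_match should dominate the ranking; in a real audit, a high importance for a proxy such as age, or for features correlated with it, would be a signal to pause the tool and investigate before relying on its recommendations.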
The broader message is clear: AI efficiency can no longer come at the expense of transparency. If courts find that algorithmic screening constitutes employment decision-making, it could redefine the compliance obligations of both software providers and their enterprise clients.