This Harvard Law Review article argues that current antidiscrimination laws, built for human decision-making, are ill-suited to algorithmic bias in the age of AI. It critiques the limitations of intent-based frameworks and disparate impact analysis under Supreme Court precedent and urges a doctrinal reset to ensure fairness in AI-driven decision systems. The piece proposes modernizing legal tools, such as recalibrating Title VII and equal protection tests, to oversee AI outputs and mandate transparent auditing, equipping attorneys and regulators to combat hidden model unfairness. Legal professionals will find in the full article concrete strategies for integrating algorithmic accountability into established civil rights regimes.