Cardozo Law Review's empirical research demonstrates how AI hiring algorithms trained on predominantly male datasets systematically replicate gender bias, as seen in Amazon's recruiting algorithm, which downgraded female candidates. The analysis reveals a fundamental measurement challenge in employment AI that medical AI does not face: researchers cannot easily determine whether rejected female candidates would have outperformed the men who were hired, because job performance is observed only for those actually selected. This academic study exposes the technical limits of bias auditing in hiring contexts and calls for structural reforms to prevent AI from codifying historical workplace discrimination.
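The measurement challenge described above can be sketched with entirely hypothetical data: selection-rate disparities are computable from hiring records alone, but outcome-based error rates (such as "qualified applicants wrongly rejected") are not, because performance labels exist only for candidates who were hired.

```python
def selection_rate(decisions):
    """Fraction of candidates selected (1 = hired, 0 = rejected)."""
    return sum(decisions) / len(decisions)

def adverse_impact_ratio(decisions_a, decisions_b):
    """EEOC-style four-fifths comparison of two groups' selection rates."""
    return selection_rate(decisions_a) / selection_rate(decisions_b)

# Hypothetical hiring decisions for two applicant groups.
men   = [1, 1, 1, 0, 1, 0, 1, 1]   # 6 of 8 hired (75%)
women = [1, 0, 0, 0, 1, 0, 0, 1]   # 3 of 8 hired (37.5%)

# This disparity metric needs only the decisions themselves.
ratio = adverse_impact_ratio(women, men)
print(f"adverse impact ratio: {ratio:.2f}")  # 0.50, below the 0.8 guideline

# Performance outcomes exist only for hired candidates; for rejected
# ones the label is simply missing (None), so error rates conditioned
# on true qualification cannot be computed from this data.
outcomes_women = [0.9, None, None, None, 0.7, None, None, 0.8]
unmeasurable = sum(o is None for o in outcomes_women)
print(f"women with no observable outcome: {unmeasurable} of {len(outcomes_women)}")
```

This is the gap the review points to: a disparity audit can flag the 0.50 ratio, but whether the rejected candidates were in fact less qualified is unknowable from the records an employer holds.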