Paxton is an innovative legal technology firm transforming the legal landscape. Our vision is to equip legal professionals with an AI assistant that supercharges efficiency, enhances quality, and enables extraordinary results.
Developer of a document review platform designed to help law firms automate the review process and find relevant evidence. The company's platform uses artificial intelligence to surface evidence supporting clients' cases, instantly view event timelines, auto-generate tags, and auto-categorize documents, helping lawyers unearth critical evidence and build comprehensive timelines.
DocLens.ai is a Software as a Service (SaaS) platform that leverages artificial intelligence (AI) and machine learning (ML) to assist insurance professionals in managing legal risks associated with liability claims and complex document reviews. The platform is designed to process both structured and unstructured data, including various types of documents, to extract critical information and provide actionable insights.
Wexler establishes the facts in any contentious matter, from an internal investigation to international litigation to an employee grievance. Disputes of any kind rely on a deep understanding of the facts. With Wexler, legal, HR, compliance, forensic accounting, and tax teams can quickly understand the facts in any matter, reducing doubt, saving critical time, and increasing ROI through more successful outcomes and fewer written-off costs.
DeepJudge is the core AI platform for legal professionals. Powered by world-class enterprise search that serves up immediate access to all of the institutional knowledge in your firm, DeepJudge enables you to build entire AI applications, encapsulate multi-step workflows, and implement LLM agents.
Alexi is the premier AI-powered litigation platform, providing legal teams with high-quality research memos, pinpointing crucial legal issues and arguments, and automating routine litigation tasks.
Cardozo Law Review's empirical research demonstrates how AI hiring algorithms trained on predominantly male datasets systematically replicate gender bias, as seen in Amazon's algorithm that downgraded women candidates. The analysis reveals measurement challenges unique to employment AI: unlike in medical AI, researchers cannot easily determine whether rejected female candidates would have outperformed the hired males. This academic study exposes the technical limitations of bias auditing in hiring contexts and calls for structural reforms to prevent AI from codifying historical workplace discrimination.
Comprehensive analysis of 13 global AI laws reveals unprecedented regulatory activity, with U.S. states introducing 400+ AI bills in 2024, six times more than in 2023, while the EU AI Act creates binding requirements for high-risk hiring systems. The research highlights critical compliance challenges as NYC's bias audit requirements, Colorado's impact assessments, and India's anti-discrimination mandates create a complex patchwork of overlapping obligations. HR professionals must navigate ADA accommodations, Title VII compliance, and emerging state-specific AI regulations while ensuring algorithmic fairness across diverse jurisdictions.
Oxford Journal's research reveals how AI developers have become increasingly secretive about training datasets as copyright litigation intensifies, prompting global calls for mandatory transparency requirements. The analysis examines the EU AI Act's groundbreaking training data disclosure mandates and G7 principles requiring transparency to protect intellectual property rights. This scholarly assessment demonstrates how transparency obligations could enable rightsholder enforcement while balancing innovation needs, offering a potential regulatory solution to the copyright-AI training data conflict.
Civil rights firm's analysis exposes how AI bias in hiring systematically discriminates against marginalized groups, with nearly 80% of employers now using AI recruitment tools despite documented gender and racial discrimination like Amazon's scrapped recruiting engine. The EEOC's new initiative to combat algorithmic discrimination reflects mounting legal challenges as biased datasets perpetuate workplace inequality across healthcare, employment, and lending. This practitioner perspective emphasizes the urgent need for human oversight and ethical AI frameworks to prevent civil rights violations in an increasingly automated hiring landscape.
USC's legal analysis explores landmark AI copyright litigation including Authors Guild v. OpenAI and NYT v. Microsoft, where publishers claim AI training violates copyright through unauthorized use of millions of articles. The piece contrasts China's progressive stance recognizing AI-generated content copyright with the U.S.'s unresolved fair use debates, highlighting how courts must balance AI innovation against creator rights. As proposed federal legislation like the Generative AI Copyright Disclosure Act advances, this analysis illuminates the critical legal battles shaping AI's future in creative industries.
The EU AI Act becomes enforceable law spanning 180 recitals and 113 articles, imposing maximum penalties of €35 million or 7% of worldwide annual turnover for non-compliance. The regulation's phased implementation begins with prohibited AI practices in February 2025, followed by transparency requirements for general-purpose AI models and full enforcement by August 2026. This comprehensive framework establishes the legal foundation for AI governance across all 27 EU member states, creating immediate compliance obligations for any organization deploying AI systems that impact EU markets.