An AI-powered legal research platform that enables users to develop LLM-based tools tailored to legal workflows. The platform also provides frameworks for evaluating AI tools across practice areas.
Josef is a no-code platform designed for legal professionals to automate legal tasks and build and launch their own legal chatbots and services. It empowers lawyers, corporate counsel, and legal operations professionals to create digital legal tools.
Clearbrief is a tool designed for lawyers to evaluate legal writing in real time, both their own work and that of opposing counsel. It aims to help lawyers prepare arguments more efficiently and communicate more effectively with judges, potentially enhancing their reputation with clients and courts. Clearbrief also offers features such as citation analysis and the ability to turn an opponent's writing into a draft response.
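Clearbrief's internals are proprietary, but a flavor of what automated citation analysis involves can be sketched in a few lines. The hypothetical `extract_citations` helper below pulls U.S. reporter-style citations out of a brief with a deliberately simplified regular expression; the pattern and names are illustration only, not Clearbrief's method.

```python
import re

# Deliberately simplified pattern for common U.S. reporter citations,
# e.g. "410 U.S. 113" or "550 F.3d 1023" (illustration only).
CITATION_RE = re.compile(
    r"\b\d{1,4}\s+(?:U\.S\.|S\.\s?Ct\.|F\.\s?(?:2d|3d|4th)|F\.\s?Supp\.)\s+\d{1,4}\b"
)

def extract_citations(brief_text: str) -> list[str]:
    """Return the reporter citations found in a block of legal writing."""
    return CITATION_RE.findall(brief_text)

sample = ("Plaintiff relies on Roe v. Wade, 410 U.S. 113 (1973), and "
          "Smith v. Jones, 550 F.3d 1023 (9th Cir. 2008).")
print(extract_citations(sample))  # ['410 U.S. 113', '550 F.3d 1023']
```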
Trusli is an automation platform that leverages large language models to automate contract review for in-house legal teams at enterprise organizations. It provides private AI that improves efficiency and reduces costs while ensuring legal teams maintain control and compliance. Trusli was acquired by Gruve AI in June 2024 and continues to operate and serve its customers with the same commitment and excellence.
DraftWise is an AI-powered contract drafting and negotiation platform designed for transactional lawyers. It leverages a firm's existing knowledge base and past deals to improve the efficiency and accuracy of contract creation and review. DraftWise integrates with tools like Microsoft Word and document management systems to provide a unified view of a firm's collective knowledge.
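DraftWise does not publish its architecture, but the underlying pattern it describes, surfacing language from past deals that resembles the clause being drafted, maps onto a standard embedding-retrieval loop. The sketch below is a generic illustration under that assumption; the `embed` stand-in and all names are hypothetical, not DraftWise's actual code.

```python
# Generic precedent-retrieval sketch -- NOT DraftWise's actual code. The
# embed() stand-in fakes an embedding model; in practice a real text
# embedding service would go here, so the results are illustrative of the
# pipeline only (random vectors carry no semantics).
import numpy as np

def embed(text: str) -> np.ndarray:
    """Hypothetical stand-in for a real text-embedding model."""
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    vec = rng.standard_normal(64)
    return vec / np.linalg.norm(vec)

past_deal_clauses = [
    "Indemnification: Seller shall indemnify Buyer against all losses...",
    "Governing Law: This Agreement is governed by the laws of Delaware...",
    "Confidentiality: Each party shall keep Confidential Information secret...",
]
index = np.stack([embed(clause) for clause in past_deal_clauses])

def most_similar_clause(draft_clause: str) -> str:
    """Return the past-deal clause closest to the draft (cosine similarity)."""
    scores = index @ embed(draft_clause)  # unit vectors: dot product = cosine
    return past_deal_clauses[int(np.argmax(scores))]

# With a real embedding model, this query would surface the
# indemnification clause from past deals.
print(most_similar_clause("Indemnity obligations of the Seller"))
```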
FirstRead is an AI legal assistant designed for small and midsize law firms. It provides support by drafting legal documents, analyzing contracts, and managing legal tasks. It aims to increase efficiency and bandwidth for law firms without the traditional costs associated with hiring additional staff.
Cardozo Law Review's empirical research demonstrates how AI hiring algorithms trained on predominantly male datasets systematically replicate gender bias, as seen in Amazon's algorithm that downgraded female candidates. The analysis reveals a measurement challenge peculiar to employment AI: unlike in medical AI, researchers cannot easily determine whether rejected female candidates would have outperformed the men who were hired. The study exposes the technical limits of bias auditing in hiring contexts and calls for structural reforms to keep AI from codifying historical workplace discrimination.
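The mechanism the study describes is easy to reproduce on synthetic data: train a classifier on historical hiring decisions that favored men, and it learns that preference even when skill is distributed identically across genders. A minimal sketch with made-up data, not Amazon's actual system:

```python
# Minimal sketch of how a model trained on skewed historical hiring data
# replicates that skew. Synthetic data, deliberately simplified.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 2000
gender = rng.integers(0, 2, n)   # 0 = male, 1 = female
skill = rng.normal(0, 1, n)      # true qualification, same distribution by gender
# Historical labels: hiring favored men regardless of skill (the bias).
hired = (skill + 1.5 * (gender == 0) + rng.normal(0, 0.5, n) > 1.0).astype(int)

model = LogisticRegression().fit(np.column_stack([gender, skill]), hired)

# Two equally skilled candidates, differing only in gender:
p_male, p_female = model.predict_proba([[0, 0.5], [1, 0.5]])[:, 1]
print(f"P(hire | male)={p_male:.2f}  P(hire | female)={p_female:.2f}")
# The gap exists because the model learned the historical preference.
```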
Comprehensive analysis of 13 global AI laws reveals unprecedented regulatory activity: U.S. states introduced 400+ AI bills in 2024, six times the 2023 total, while the EU AI Act creates binding requirements for high-risk hiring systems. The research highlights critical compliance challenges as NYC's bias audit requirements, Colorado's impact assessments, and India's anti-discrimination mandates create a complex patchwork of overlapping obligations. HR professionals must navigate ADA accommodations, Title VII compliance, and emerging state-specific AI regulations while ensuring algorithmic fairness across diverse jurisdictions.
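Of those regimes, NYC's Local Law 144 is the most mechanical: its bias audits center on an impact ratio, each category's selection rate divided by the most-selected category's rate. A toy calculation with invented numbers:

```python
# Toy impact-ratio calculation of the kind NYC Local Law 144 bias audits
# require: each category's selection rate divided by the rate of the most
# selected category. All numbers are invented for illustration.
selections = {"men": (120, 400), "women": (70, 350)}  # (selected, applicants)

rates = {group: sel / total for group, (sel, total) in selections.items()}
top_rate = max(rates.values())
impact_ratios = {group: rate / top_rate for group, rate in rates.items()}

for group in selections:
    print(f"{group}: rate={rates[group]:.2f}, impact ratio={impact_ratios[group]:.2f}")
# Under the EEOC's four-fifths rule of thumb, a ratio below 0.80 flags
# potential adverse impact -- women here: 0.20 / 0.30 = 0.67.
```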
Oxford Journal's research reveals how AI developers have become increasingly secretive about training datasets as copyright litigation intensifies, prompting global calls for mandatory transparency requirements. The analysis examines the EU AI Act's groundbreaking training data disclosure mandates and G7 principles requiring transparency to protect intellectual property rights. This scholarly assessment demonstrates how transparency obligations could enable rightsholder enforcement while balancing innovation needs, offering a potential regulatory solution to the copyright-AI training data conflict.
Civil rights firm's analysis exposes how AI bias in hiring systematically discriminates against marginalized groups, with nearly 80% of employers now using AI recruitment tools despite documented gender and racial discrimination like Amazon's scrapped recruiting engine. The EEOC's new initiative to combat algorithmic discrimination reflects mounting legal challenges as biased datasets perpetuate workplace inequality across healthcare, employment, and lending. This practitioner perspective emphasizes the urgent need for human oversight and ethical AI frameworks to prevent civil rights violations in an increasingly automated hiring landscape.
USC's legal analysis explores landmark AI copyright litigation including Authors Guild v. OpenAI and NYT v. Microsoft, where publishers claim AI training violates copyright through unauthorized use of millions of articles. The piece contrasts China's progressive stance recognizing AI-generated content copyright with the U.S.'s unresolved fair use debates, highlighting how courts must balance AI innovation against creator rights. As proposed federal legislation like the Generative AI Copyright Disclosure Act advances, this analysis illuminates the critical legal battles shaping AI's future in creative industries.
The EU AI Act is now enforceable law, spanning 180 recitals and 113 articles and imposing maximum penalties of €35 million or 7% of worldwide annual turnover, whichever is higher, for non-compliance. The regulation's phased implementation begins with prohibited AI practices in February 2025, followed by transparency requirements for general-purpose AI models and full enforcement by August 2026. This comprehensive framework establishes the legal foundation for AI governance across all 27 EU member states, creating immediate compliance obligations for any organization deploying AI systems that affect EU markets.
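Because the cap is the higher of the two figures, the effective maximum scales with company size. A quick arithmetic check for a hypothetical company with €2 billion in turnover:

```python
# The EU AI Act's top penalty tier is the higher of EUR 35 million or 7% of
# worldwide annual turnover. Hypothetical company, EUR 2 billion turnover:
def max_penalty_eur(worldwide_annual_turnover: float) -> float:
    return max(35_000_000.0, 0.07 * worldwide_annual_turnover)

print(f"EUR {max_penalty_eur(2_000_000_000):,.0f}")  # EUR 140,000,000
```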