Ontra is the global leader in AI legal tech for private markets. Powered by industry-leading AI, data from 1M+ contracts, and a global network of legal professionals, Ontra's private markets technology platform streamlines and optimizes critical legal and compliance workflows across the full fund lifecycle. Ontra’s purpose-built solutions automate contracts, streamline obligation management, digitize entity management, and surface insights.
SpeedLegal is an AI contract negotiator that helps startups save roughly $1,000 per contract, or $140,000+ per year, on contract review. Business users of SpeedLegal can easily spot contract risks, negotiate better terms, cut review time by 75%, and triple their deal closures.
Definely is a leading provider of LegalTech solutions for drafting, reviewing, and understanding legal documents.
Luminance is the pioneer in Legal-Grade™ AI, wherever computer meets contract. Using a Mixture of Experts approach known as the “Panel of Judges,” Luminance brings specialist AI to every touchpoint a business has with its contracts, from generation to negotiation and post-execution analysis. Developed by AI experts from the University of Cambridge, Luminance's technology is trusted by 700+ customers in 70+ countries, from AMD and the LG Group to Hitachi, BBC Studios, and Staples.
Spellbook is an AI-powered contract drafting and review tool designed for legal professionals. Integrated directly into Microsoft Word, it leverages advanced language models, such as OpenAI's GPT-4, to assist lawyers in drafting, reviewing, and managing contracts more efficiently. Key features include generating new clauses based on context, detecting aggressive terms, and suggesting missing clauses to enhance contract quality.
Built on App Orchid's state-of-the-art AI platform, ContractAI is an AI-powered, SaaS-based advanced CLM solution that automates and streamlines the analysis, creation, and negotiation of contracts. ContractAI uses AI to automatically ingest and analyze historical contracts and to author templates based on terms proven to be win-win. It eliminates the painful redlining process by giving suppliers vetted clause options.
Covington's analysis examines the October 2024 National Security Memorandum, which directs AISI to conduct voluntary preliminary testing of frontier AI models for national security threats, including offensive cyber operations, biological and chemical weapons development, and autonomous malicious behavior. The assessment details new requirements for agencies to implement AI risk management practices, testing protocols, and classified evaluations, building on NIST's dual-use foundation model guidelines. This authoritative national security law analysis shows how the Biden administration's AI NSM establishes comprehensive governance frameworks for military and intelligence AI deployment while requiring private-sector cooperation in threat assessment and mitigation.
This Harvard Law Review chapter tackles the “amoral drift” in AI corporate governance, warning that traditional tools—like board oversight and shareholder limits—fail to prevent companies like OpenAI and Anthropic from slipping toward profit-driven motives. It introduces the concept of “superstakeholders”—key talent and Big Tech backers whose equity-based influence can undermine an organization’s prosocial mission. The article also examines co-governance parallels, advocating democratic oversight structures that could anchor AI firms to ethical and societal objectives. Legal professionals and corporate counsel will want to dive into this piece to understand innovative governance mechanisms that balance existential AI risks with accountability.
This NCSL overview analyzes the surge of AI legislation across U.S. states in 2025, reporting on dozens of bills and task forces addressing everything from algorithmic bias to election disinformation. Legal practitioners will find this essential, as it synthesizes how states are shaping AI governance—providing insight into fast-moving, jurisdiction-specific trends and emerging compliance triggers. Click through to explore the full toolkit, tracked bills, and strategic guidance for navigating the evolving state-level legal landscape.
This Harvard Law Review “Developments in the Law” chapter examines the “creative double bind” that generative AI imposes on artists—offering powerful new tools while simultaneously threatening traditional copyright frameworks. It explores how this tension manifests differently across creative communities—from screenwriters to choreographers—depending on their varying attachments to existing IP protections. The piece spotlights how strategies like private negotiations, as seen in the WGA writers’ strike, could provide models for adapting copyright rules to balance innovation and protection. IP practitioners and policy experts will find this essential reading for its nuanced analysis and practical roadmap for navigating AI’s impact on creative industries—click through to explore its compelling doctrinal insights.
This Harvard Law Review article argues that current antidiscrimination laws—built for human decision-making—are ill-suited to handle algorithmic bias in the age of AI. It critiques the limitations of intent-based frameworks and disparate impact analysis under Supreme Court precedents, urging a doctrinal reset to ensure fairness in AI‑driven decision systems. The piece proposes modernizing legal tools—such as recalibrating Title VII and equal protection tests—to oversee AI outputs and mandate transparent auditing, empowering attorneys and regulators to combat hidden model unfairness. Legal professionals will want to read the full article to explore concrete strategies for integrating algorithmic accountability into established civil rights regimes.
The Harvard Law Review article advocates a co-governance model for AI regulation that involves governments, industry, civil society, and impacted communities working collaboratively. It argues traditional top-down rules fall short for AI’s complexity and urges transparency, inclusivity, and shared responsibility. This approach aims to balance innovation with accountability, embedding ethical oversight and continuous stakeholder dialogue. Legal professionals and policymakers will find this framework essential for crafting adaptable, equitable AI governance in an evolving tech landscape.