Ontra is the global leader in AI legal tech for private markets. Powered by industry-leading AI, data from 1M+ contracts, and a global network of legal professionals, Ontra’s private markets technology platform streamlines and optimizes critical legal and compliance workflows across the full fund lifecycle. Ontra’s purpose-built solutions automate contracts, streamline obligation management, digitize entity management, and surface insights.
SpeedLegal is an AI contract negotiator that helps startups save roughly $1k per contract (about $140k+ per year) on contract review. Business people using SpeedLegal easily spot contract risks, negotiate better terms, cut review time by 75%, and close 3x more deals.
Definely is a leading provider of LegalTech solutions for drafting, reviewing, and understanding legal documents.
Luminance is the pioneer in Legal-Grade™ AI, wherever computer meets contract. Using a Mixture of Experts approach known as the “Panel of Judges,” Luminance brings specialist AI to every touchpoint a business has with its contracts, from generation to negotiation and post-execution analysis. Developed by AI experts from the University of Cambridge, Luminance’s technology is trusted by 700+ customers in 70+ countries, from AMD and the LG Group to Hitachi, BBC Studios, and Staples.
Spellbook is an AI-powered contract drafting and review tool designed for legal professionals. Integrated directly into Microsoft Word, it leverages advanced language models, such as OpenAI's GPT-4, to assist lawyers in drafting, reviewing, and managing contracts more efficiently. Key features include generating new clauses based on context, detecting aggressive terms, and suggesting missing clauses to enhance contract quality.
Built on App Orchid's state-of-the-art AI platform, ContractAI is an AI-powered, SaaS-based Advanced CLM solution that automates and streamlines the analysis, creation, and negotiation of contracts. ContractAI uses AI to automatically ingest and analyze historical contracts, authoring templates based on terms proven to be win-win. It eliminates the painful redlining process by giving suppliers vetted clause options.
This in-depth analysis by Chad A. Rutkowski of Baker & Hostetler unpacks the February 2025 Thomson Reuters v. Ross Intelligence decision, in which the Delaware court granted summary judgment for Thomson Reuters, holding that Ross’s use of Westlaw headnotes to train its AI tool did not qualify as fair use. The court made clear that the training was commercial, non-transformative, and created a market substitute, emphasizing that even “intermediate” copying can fail the fair use test absent substantial transformation. The ruling matters deeply to legal professionals: it sets a critical precedent that AI developers must secure proper licenses for copyrighted training data, with fair use defenses facing steep judicial scrutiny. With broader implications for generative AI platforms like OpenAI and Meta, this piece is essential reading for IP counsel advising on AI-data compliance and litigation strategy, making it a must-click for forward-thinking practitioners.
This insightful analysis tackles a rarely addressed issue: whether AI trainers can claim copyright ownership over the output they help generate. Rutkowski dives into scenarios like image generation with Midjourney, questioning whether human prompts and curatorial choices elevate trainers into co-authorship roles. This matters for legal professionals advising AI developers and users, as it introduces new contours to authorship, licensing, and ownership arguments that could reshape IP strategies. By spotlighting the nuanced interplay between human input and AI-generated results, the piece urges counsel to proactively clarify rights, licensing, and attribution in training workflows, making it a vital read for forward-thinking IP practitioners.
This in-depth analysis by Eric Dinallo, Avi Gesser, Matt Kelly, Samuel J. Allaman, Melyssa Eigen, Ned Terrace, Stephanie Thomas, and Mengyi Xu examines Colorado’s proposed amendment to extend its AI governance and risk-management regulations—originally for life insurers—to auto and health insurance providers. Highlighting key updates like bias evaluations, board-level oversight, clear consumer explanation of adverse AI-driven decisions, and mandated human oversight in healthcare determinations, the authors draw attention to near-term compliance deadlines in 2025. It underscores a shift, showing how state regulators are preemptively integrating AI into sector-specific governance, offering legal and compliance teams concrete preparation steps. This matters to legal professionals because it signals growing state-level enforcement of AI accountability in insurance, helping counsel advise insurers on bridging internal policies with emerging regulatory frameworks.
This analysis—authored by Megan K. Bannigan, Christopher S. Ford, Samuel J. Allaman, and Abigail Liles—breaks down a pivotal February 2025 ruling in Tremblay v. OpenAI, where a U.S. federal court ordered OpenAI to produce its full training dataset for GPT‑4 in a copyright infringement case. The ruling underscores that courts are now treating training data as central to proving direct AI‑related copyright claims, even amidst the tension between discovery obligations and trade‑secret protection. For legal professionals, this marks a significant escalation in e‑discovery strategy: practitioners must now advise AI developers on balancing transparency, litigation readiness, and data security under protective orders. By spotlighting emerging standards for dataset disclosure, the article offers invaluable insight for litigators, in‑house counsel, and compliance teams managing AI‑driven legal risk.
This detailed analysis from Debevoise & Plimpton explores how the European Union’s AI Act intersects with insurance industry practices, offering a risk-tiered framework that directly impacts underwriting, fraud detection, and customer service AI tools. The article contrasts the EU's prescriptive regime with the UK's more flexible, principles-based oversight, helping legal and compliance teams understand diverging global regulatory landscapes. A standout insight is that many insurer AI use cases may fall outside the AI Act’s strictest categories—but existing frameworks like Solvency II, DORA, and the IDD already impose significant governance and transparency expectations. With August 2026 compliance deadlines looming, this piece provides insurers and their counsel with a practical roadmap to prepare for cross-jurisdictional AI oversight.
This analysis from Annie Dulka explores how AI applications in international human rights law are reshaping legal frameworks, from refugee protection and due process to surveillance governance. It outlines both innovative benefits (e.g., enhanced monitoring, rapid documentation) and legal risks (e.g., bias in asylum decisions, privacy violations), arguing that robust oversight and principled deployment are essential to align AI tools with international human rights norms. This matters significantly for legal professionals navigating cross-border AI use, as it offers a practical roadmap for integrating AI ethics into treaty interpretation, case law, and compliance mechanisms. Engaging and authoritative, the piece encourages lawyers and policymakers to proactively shape AI deployment in human rights contexts, making it a compelling entry point for those advising on global AI governance.