Ontra is the global leader in AI legal tech for private markets. Powered by industry-leading AI, data from 1M+ contracts, and a global network of legal professionals, Ontra’s private markets technology platform streamlines and optimizes critical legal and compliance workflows across the full fund lifecycle. Ontra’s purpose-built solutions automate contracts, streamline obligation management, digitize entity management, and surface insights.
SpeedLegal is an AI contract negotiator that helps startups save roughly $1k per contract, or $140k+ per year, on contract review. Business users of SpeedLegal easily spot contract risks, negotiate better terms, cut review time by 75%, and boost deal closures 3X.
Definely is a leading provider of LegalTech solutions for drafting, reviewing, and understanding legal documents.
Luminance is the pioneer in Legal-Grade™ AI, wherever computer meets contract. Using a Mixture of Experts approach, known as the “Panel of Judges”, Luminance brings specialist AI to every touchpoint a business has with its contracts, from generation to negotiation and post-execution analysis. Developed by AI experts from the University of Cambridge, Luminance’s technology is trusted by 700+ customers in 70+ countries, from AMD and the LG Group to Hitachi, BBC Studios, and Staples.
Spellbook is an AI-powered contract drafting and review tool designed for legal professionals. Integrated directly into Microsoft Word, it leverages advanced language models, such as OpenAI's GPT-4, to assist lawyers in drafting, reviewing, and managing contracts more efficiently. Key features include generating new clauses based on context, detecting aggressive terms, and suggesting missing clauses to enhance contract quality.
Built on App Orchid's state-of-the-art AI platform, ContractAI is an AI-powered, SaaS-based contract lifecycle management (CLM) solution that automates and streamlines the analysis, creation, and negotiation of contracts. ContractAI uses AI to automatically ingest and analyze historical contracts and author templates based on terms proven to be win-win. It eliminates the painful redlining process by giving suppliers pre-vetted clause options.
This Perspective by Ben Chester Cheong (Singapore University of Social Sciences & Cambridge) offers a comprehensive legal–ethical review of transparency and accountability challenges in AI systems governing human wellbeing. It structures the discussion into four pillars—technical explainability methods, regulatory frameworks, ethical safeguards, and multi‑stakeholder collaboration—highlighting how each area plays a vital role in ensuring trust and societal resilience. Legal professionals will appreciate its actionable framework that bridges tech, ethics, and governance, making it a timely resource amid emerging regulations like the GDPR “right to explanation” and EU AI Act mandates. By offering strategic clarity and policy cohesion, this article equips lawyers, compliance leaders, and policymakers with tools to embed transparency and accountability into AI systems that shape lives—making it a must‑read for anyone advising on responsible AI deployment.
This essay by Margot E. Kaminski and Meg Leta Jones explores how current legal frameworks actively construct AI-generated speech, rather than being passively disrupted by it. It introduces the “legal construction of technology” method—analyzing how laws like the First Amendment, content moderation, risk regulation, and consumer protection actively interpret and shape AI speech. This analysis matters to legal professionals because it reveals that existing institutions and norms already provide structured pathways for meaningful oversight, shifting the conversation from reactive problem-solving to proactive values-based policy design. By demonstrating that law and AI co-evolve through these intentional constructions, this piece empowers lawyers and policymakers to craft more effective, principled governance—prompting deeper engagement with the field.
This article by Graham H. Ryan analyzes how generative AI challenges the legal immunity conferred by Section 230 of the Communications Decency Act—and why that protection may crumble under new judicial scrutiny. Ryan argues that generative AI systems “create or develop content” and thus likely fall outside Section 230’s existing scope, exposing providers to increased liability for design decisions and algorithmic contributions. It matters for legal professionals because emerging case law may redefine liability standards—from co-authoring content to design-based claims—signaling a pivotal shift in AI governance and internet law that practitioners need to monitor closely. By framing generative AI as a catalyst for reevaluating the legal foundations of internet speech, the article urges lawyers to proactively reassess risk strategies and regulatory compliance in this evolving landscape.
This insightful analysis explores how China’s August 15, 2023 regulations on generative AI reflect a strategic choice to slow AI progress for social and political control. It reveals a key finding that Beijing’s cautious regulatory approach contrasts sharply with the innovation-first strategies of the U.S. and EU, granting other jurisdictions essential breathing room to develop responsible AI policies. Legal professionals will find this article timely and compelling, as it provides practical insight into how geopolitical AI maneuvering reshapes cross-border legal strategy, compliance, and tech governance. By positioning China’s policy as a global pivot point, this piece equips lawyers and policymakers with a nuanced understanding of how AI regulation is being shaped on the international stage—prompting further investigation and dialogue.
This analysis by Matt Blaszczyk, Geoffrey McGovern, and Karlyn D. Stanley explores how U.S. copyright law grapples with AI-generated content, addressing whether AI-assisted works qualify for protection and how training datasets are treated under both U.S. and EU doctrines. It highlights a key insight: U.S. law requires human “authorial contribution” for copyrightability, while the EU allows rights holders to challenge commercial AI training—underscoring a growing legal divide. Timely hearings in Congress and updates from the U.S. Copyright Office make this discussion urgent for legal professionals managing copyright risks in generative AI projects. The analysis empowers practitioners with a clear roadmap for navigating emerging policy debates, licensing strategies, and litigation landscapes around AI-authored content—making it essential reading for IP counsel advising on innovation-driven initiatives.
This expert analysis from Dentons partners Jennifer Cass, Anna Copeman, Sam Caunt, and David Wagget examines the unresolved IP challenges arising from generative AI in 2024 and the legal “cliffhangers” heading into 2025. It highlights key issues like copyright infringement during AI training, ownership of AI-generated works and inventions, and emerging litigation—such as Getty Images v. Stability AI. Legal professionals will find value in its forward-looking take on 2025 reforms, including licensing trends, contractual risk strategies, and pending court rulings. Written with practical insight, this piece equips lawyers with proactive tools to guide clients through rapidly evolving AI‑IP terrain.