Robin AI is a leader in legal AI. Our Legal AI Assistant is used by hundreds of businesses globally to harness the power of generative AI for legal work. We empower legal teams and lawyers to make contract processes effortless.
Pincites makes contract negotiations faster and more consistent for legal teams. Using advanced language models, Pincites allows legal teams to build robust contract playbooks that any internal team can apply consistently within Microsoft Word.
ThoughtRiver was founded in 2016 to transform third-party contract review. Over the past nine years, we’ve become a leader in the Legal Tech space, working with some of the world’s top legal teams and organizations. Our success is built on integrating human-led, legally trained data into our own LLM, ensuring accuracy and relevance in contract analysis.
Our AI-powered litigation tools open up a dialogue with your data, so your legal team can focus on what they do best: thinking.
Beagle is transforming how law firms, corporate legal teams, and eDiscovery service providers handle document review and eDiscovery. Our AI-powered platform delivers faster, more accurate results, streamlining processes and reducing costs to help you uncover key data quickly and efficiently.
Casetext is a legal research platform that uses artificial intelligence to help lawyers and legal professionals efficiently find relevant case law, statutes, and other legal materials. It is particularly known for its AI-powered tool, CARA (Case Analysis Research Assistant), which allows users to upload legal documents and receive highly relevant case law recommendations.
This in-depth analysis by Chad A. Rutkowski of Baker & Hostetler unpacks the February 2025 Thomson Reuters v. Ross Intelligence decision, in which the Delaware court granted summary judgment for Thomson Reuters, holding that Ross's use of Westlaw headnotes to train its AI tool did not qualify as fair use. The court made clear that the training was commercial, non-transformative, and created a market substitute, emphasizing that even "intermediate" copying can fail to qualify as fair use absent substantial transformation. This ruling matters deeply to legal professionals: it sets a critical precedent that AI developers must secure proper licenses for copyrighted training data, with fair use defenses facing steep judicial scrutiny. With broader implications for generative AI platforms like OpenAI or Meta, this piece is essential reading for IP counsel advising on AI-data compliance and litigation strategy.
This insightful analysis tackles a rarely addressed issue: whether AI trainers can claim copyright ownership over the output they help generate. Rutkowski dives into scenarios like image generation with Midjourney, questioning whether human prompts and curatorial choices elevate trainers into co-authorship roles. This matters for legal professionals advising AI developers and users, as it introduces new contours to authorship, licensing, and ownership arguments that could reshape IP strategies. By spotlighting the nuanced interplay between human input and AI-generated results, the piece urges counsel to proactively clarify rights, licensing, and attribution in training workflows, making it a vital read for forward-thinking IP practitioners.
This in-depth analysis by Eric Dinallo, Avi Gesser, Matt Kelly, Samuel J. Allaman, Melyssa Eigen, Ned Terrace, Stephanie Thomas, and Mengyi Xu examines Colorado's proposed amendment to extend its AI governance and risk-management regulations, originally written for life insurers, to auto and health insurance providers. Highlighting key updates such as bias evaluations, board-level oversight, clear consumer explanations of adverse AI-driven decisions, and mandated human oversight in healthcare determinations, the authors draw attention to near-term compliance deadlines in 2025. The piece underscores how state regulators are preemptively integrating AI into sector-specific governance and offers legal and compliance teams concrete preparation steps. It matters to legal professionals because it signals growing state-level enforcement of AI accountability in insurance, helping counsel advise insurers on bridging internal policies with emerging regulatory frameworks.
This analysis—authored by Megan K. Bannigan, Christopher S. Ford, Samuel J. Allaman, and Abigail Liles—breaks down a pivotal February 2025 ruling in Tremblay v. OpenAI, where a U.S. federal court ordered OpenAI to produce its full training dataset for GPT‑4 in a copyright infringement case. The ruling underscores that courts are now treating training data as central to proving direct AI‑related copyright claims, even amidst the tension between discovery obligations and trade‑secret protection. For legal professionals, this marks a significant escalation in e‑discovery strategy: practitioners must now advise AI developers on balancing transparency, litigation readiness, and data security under protective orders. By spotlighting emerging standards for dataset disclosure, the article offers invaluable insight for litigators, in‑house counsel, and compliance teams managing AI‑driven legal risk.
This detailed analysis from Debevoise & Plimpton explores how the European Union’s AI Act intersects with insurance industry practices, offering a risk-tiered framework that directly impacts underwriting, fraud detection, and customer service AI tools. The article contrasts the EU's prescriptive regime with the UK's more flexible, principles-based oversight, helping legal and compliance teams understand diverging global regulatory landscapes. A standout insight is that many insurer AI use cases may fall outside the AI Act’s strictest categories—but existing frameworks like Solvency II, DORA, and the IDD already impose significant governance and transparency expectations. With August 2026 compliance deadlines looming, this piece provides insurers and their counsel with a practical roadmap to prepare for cross-jurisdictional AI oversight.
This analysis from Annie Dulka explores how AI applications in international human rights law are reshaping legal frameworks, from refugee protection and due process to surveillance governance. It outlines both innovative benefits (e.g., enhanced monitoring, rapid documentation) and legal risks (e.g., bias in asylum decisions, privacy violations), arguing that robust oversight and principled deployment are essential to align AI tools with international human rights norms. This matters significantly for legal professionals navigating cross-border AI use, as it offers a practical roadmap for integrating AI ethics into treaty interpretation, case law, and compliance mechanisms. Engaging and authoritative, the piece encourages lawyers and policymakers to proactively shape AI deployment in human rights contexts, making it a compelling entry point for those advising on global AI governance.