Huski.ai is a company that leverages AI to assist IP lawyers and brand professionals with trademark clearance, watching, and enforcement, with the goal of streamlining brand protection and growth.
PatSnap is a company specializing in innovation intelligence and patent analytics. Founded in 2007 and headquartered in Beijing, it offers an AI-powered platform that assists various industries in the ideation-to-commercialization process. The platform analyzes patents, R&D insights, and competitive landscapes, helping innovation professionals uncover emerging trends, identify risks, and find opportunities.
IPRally is a company specializing in AI-driven patent search and analysis tools. It offers a web application that uses knowledge graphs and supervised deep learning AI to provide semantic and technical understanding of patent literature. The company aims to increase the productivity of inventors and patent professionals by offering a search tool that functions like a patent expert.
EvenUp is a venture-backed generative AI startup that focuses on ensuring injury victims receive the full value of their claims. It achieves this by using AI to analyze medical documents and case files, turning them into comprehensive demand packages for injury lawyers. EvenUp aims to provide equal access to justice in personal injury cases, regardless of a person's background, income, or access to quality representation.
Harvey is a suite of AI tools designed for legal professionals, offering solutions for drafting, research, and document analysis. Developed by experts in artificial intelligence, Harvey utilizes advanced natural language processing to assist legal experts in their work.
Canarie is developing a compliance platform that uses AI and ML to automate the creation, review, and revision of disclosures and policies for financial institutions.
This in-depth analysis by Chad A. Rutkowski of Baker & Hostetler unpacks the February 2025 Thomson Reuters v. Ross Intelligence decision, in which the Delaware court granted summary judgment for Thomson Reuters, holding that Ross's use of Westlaw headnotes to train its AI tool did not qualify as fair use. The court made clear that the training was commercial and non-transformative and created a market substitute, emphasizing that even "intermediate" copying without substantial transformation can fail fair use protections. This ruling matters deeply to legal professionals: it sets a critical precedent that AI developers must secure proper licenses for copyrighted training data, with fair use defenses facing steep judicial scrutiny. With broader implications for generative AI platforms like OpenAI or Meta, this piece is essential reading for IP counsel advising on AI-data compliance and litigation strategy.
This insightful analysis tackles a rarely addressed issue—whether AI trainers can claim copyright ownership over the output they help generate. Rutkowski dives into scenarios like image-generation with Midjourney, questioning if human prompts and curatorial choices elevate trainers into co-authorship roles. This matters for legal professionals advising AI developers and users, as it introduces new contours to authorship, licensing, and ownership arguments that could reshape IP strategies. By spotlighting the nuanced interplay between human input and AI-generated results, the piece urges counsel to proactively clarify rights, licensing, and attribution in training workflows—making it a vital read for forward-thinking IP practitioners.
This in-depth analysis by Eric Dinallo, Avi Gesser, Matt Kelly, Samuel J. Allaman, Melyssa Eigen, Ned Terrace, Stephanie Thomas, and Mengyi Xu examines Colorado’s proposed amendment to extend its AI governance and risk-management regulations—originally for life insurers—to auto and health insurance providers. Highlighting key updates like bias evaluations, board-level oversight, clear consumer explanation of adverse AI-driven decisions, and mandated human oversight in healthcare determinations, the authors draw attention to near-term compliance deadlines in 2025. It underscores a shift, showing how state regulators are preemptively integrating AI into sector-specific governance, offering legal and compliance teams concrete preparation steps. This matters to legal professionals because it signals growing state-level enforcement of AI accountability in insurance, helping counsel advise insurers on bridging internal policies with emerging regulatory frameworks.
This analysis—authored by Megan K. Bannigan, Christopher S. Ford, Samuel J. Allaman, and Abigail Liles—breaks down a pivotal February 2025 ruling in Tremblay v. OpenAI, where a U.S. federal court ordered OpenAI to produce its full training dataset for GPT‑4 in a copyright infringement case. The ruling underscores that courts are now treating training data as central to proving direct AI‑related copyright claims, even amidst the tension between discovery obligations and trade‑secret protection. For legal professionals, this marks a significant escalation in e‑discovery strategy: practitioners must now advise AI developers on balancing transparency, litigation readiness, and data security under protective orders. By spotlighting emerging standards for dataset disclosure, the article offers invaluable insight for litigators, in‑house counsel, and compliance teams managing AI‑driven legal risk.
This detailed analysis from Debevoise & Plimpton explores how the European Union’s AI Act intersects with insurance industry practices, offering a risk-tiered framework that directly impacts underwriting, fraud detection, and customer service AI tools. The article contrasts the EU's prescriptive regime with the UK's more flexible, principles-based oversight, helping legal and compliance teams understand diverging global regulatory landscapes. A standout insight is that many insurer AI use cases may fall outside the AI Act’s strictest categories—but existing frameworks like Solvency II, DORA, and the IDD already impose significant governance and transparency expectations. With August 2026 compliance deadlines looming, this piece provides insurers and their counsel with a practical roadmap to prepare for cross-jurisdictional AI oversight.
This analysis from Annie Dulka explores how AI applications in international human rights law are reshaping legal frameworks, from refugee protection and due process to surveillance governance. It outlines both innovative benefits (e.g., enhanced monitoring, rapid documentation) and legal risks (e.g., bias in asylum decisions, privacy violations), arguing that robust oversight and principled deployment are essential to align AI tools with international human rights norms. This matters significantly for legal professionals navigating cross-border AI use, as it offers a practical roadmap for integrating AI ethics into treaty interpretation, case law, and compliance mechanisms. Engaging and authoritative, the piece encourages lawyers and policymakers to proactively shape AI deployment in human rights contexts, making it a compelling entry point for those advising on global AI governance.