Huski.ai is a company that leverages AI to assist IP lawyers and brand professionals with trademark clearance, watching, and enforcement. It aims to streamline brand protection and growth using cutting-edge AI technology.
PatSnap is a company specializing in innovation intelligence and patent analytics. Founded in 2007 and headquartered in Beijing, it offers an AI-powered platform that assists various industries from ideation to commercialization. The platform analyzes patents, R&D insights, and competitive landscapes, helping innovation professionals uncover emerging trends, identify risks, and find opportunities.
IPRally is a company specializing in AI-driven patent search and analysis tools. It offers a web application that uses knowledge graphs and supervised deep learning to provide a semantic, technical understanding of patent literature. The company aims to increase the productivity of inventors and patent professionals by offering a search tool that functions like a patent expert.
EvenUp is a venture-backed generative AI startup that focuses on ensuring injury victims receive the full value of their claims. It achieves this by using AI to analyze medical documents and case files, turning them into comprehensive demand packages for injury lawyers. EvenUp aims to provide equal access to justice in personal injury cases, regardless of a person's background, income, or access to quality representation.
Harvey is a suite of AI tools designed for legal professionals, offering solutions for drafting, research, and document analysis. Developed by experts in artificial intelligence, Harvey uses advanced natural language processing to assist legal practitioners in their work.
Canarie is developing a compliance platform that uses AI and ML to automate the creation, review, and revision of disclosures and policies for financial institutions.
This report reflects on the evolution of AI-related litigation in 2024, detailing how plaintiffs diversified beyond copyright claims into new fronts like trademark dilution, false advertising, right of publicity, and unfair competition. It emphasizes a key insight: while early copyright lawsuits faltered, savvy plaintiffs are adapting strategies and naming broader defendant classes—signaling a sharp uptick in legal complexity for AI developers. This trend matters to legal professionals because it marks a shift from isolated disputes to systemic risk exposure, making proactive counsel and strategic defense critical as new cases continue to roll in. Offering forward-looking clarity on forthcoming U.S. Copyright Office guidance and potential court rulings, the piece equips IP litigators and in-house counsel with practical foresight to navigate 2025’s AI‑driven legal terrain.
This article by Zach Harned, Matthew P. Lungren, and Pranav Rajpurkar explores the critical intersection of machine-vision AI in medical imaging and how it complicates malpractice liability. It reveals that AI's interpretability and diagnostic accuracy could reduce physician liability, while also raising fresh questions for manufacturers under product‑liability and “learned intermediary” doctrines. For legal professionals, this matters because it spotlights evolving standards of care, regulatory classification by the FDA, and strategic liability planning in healthcare AI deployment. The piece delivers actionable insight into balancing innovation and patient safety, prompting practitioners to reassess advice to clients in the fast-evolving medical‑AI landscape.
This article delivers a comprehensive set of consumer‑centric principles for governing AI personal assistants, emphasizing how clear terms of service, transparent data use, and specified delegation boundaries empower users and shape responsible AI deployment. It outlines critical protections—like explicit privacy terms, opt‑out training clauses, and liability limits—that matter deeply to legal professionals advising on AI‑driven user interfaces and compliance. By spotlighting real‑world risks—such as privacy erosion, unauthorized spending, and overreach of autonomous agents—the piece drives home why robust contract design and regulatory alignment are essential now. With actionable clarity and legal foresight, the article urges practitioners to draft AI terms that safeguard consumer rights while fostering innovation.
This forward-looking analysis explores how retrieval-augmented generation (RAG) combines external document retrieval with LLM generation to dramatically reduce hallucinations and enhance factual grounding in legal tasks. Johnston spotlights a November 2024 randomized trial showing that while GPT-4 sped up legal work, it did not improve accuracy, suggesting RAG's retrieval layer offers the key breakthrough. This matters for legal professionals because it shows a tangible path to reliable, citation-capable AI tools built on verifiable sources like statutes and case law. By demonstrating that RAG-equipped systems can elevate LLMs from flashy assistants to trusted research partners, the article invites lawyers and legal tech developers to rethink how they deploy AI in practice.
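To make the mechanism concrete, the following is a minimal sketch of the RAG pattern the article describes: retrieve the passages most relevant to a query, then assemble a prompt that instructs the model to answer from, and cite, those sources. The toy corpus, bag-of-words scoring, and function names are illustrative assumptions, not the systems evaluated in the trial, and the final LLM call is deliberately left out rather than tied to any particular vendor's API.

```python
# Minimal retrieval-augmented generation (RAG) sketch: retrieve relevant passages
# from a small corpus, then build a grounded, citation-forcing prompt for an LLM.
# The corpus, the bag-of-words retriever, and the prompt format are illustrative
# assumptions only; a production system would use a real index and an LLM API.
import math
from collections import Counter

CORPUS = [
    "17 U.S.C. § 107 sets out four factors for evaluating fair use of a copyrighted work.",
    "Under the learned intermediary doctrine, a manufacturer's duty to warn may run to the physician.",
    "FTC Act Section 5 prohibits unfair or deceptive acts or practices in or affecting commerce.",
]

def bag_of_words(text: str) -> Counter:
    """Lowercase bag-of-words term counts, used as a sparse vector."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse term-count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(query: str, k: int = 2) -> list[str]:
    """Return the k corpus passages most similar to the query."""
    q = bag_of_words(query)
    ranked = sorted(CORPUS, key=lambda doc: cosine(q, bag_of_words(doc)), reverse=True)
    return ranked[:k]

def build_prompt(query: str) -> str:
    """Prepend retrieved passages so the model can cite verifiable sources."""
    context = "\n".join(f"[{i + 1}] {p}" for i, p in enumerate(retrieve(query)))
    return (
        "Answer using only the numbered sources below and cite them by number.\n"
        f"{context}\n\nQuestion: {query}\nAnswer:"
    )

if __name__ == "__main__":
    # In a real pipeline this prompt would be sent to an LLM; here we just print it.
    print(build_prompt("What factors govern fair use under the Copyright Act?"))
```

The design point the article makes maps onto the last step: because the prompt carries the retrieved, verifiable text, the model's output can be checked against numbered sources instead of being taken on faith.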
This digest piece proposes a framework for government‑mandated AI audits, drawing on financial auditing standards to ensure transparency across an AI system’s full lifecycle—from data and model development to deployment. It emphasizes that professional, standards‑based oversight can foster public trust and accountability without stifling innovation, turning audits into drivers of advancement rather than burdensome compliance. Legal professionals will find it essential for understanding what credible, institutionalized AI governance could look like and how regulators may begin enforcing it. By offering actionable reforms and highlighting the role of independent auditors, this article equips lawyers and policymakers with practical guidance to shape and prepare for the next phase of AI regulation.
This press release from the FTC unveils Operation AI Comply, targeting five companies (including DoNotPay and Rytr) for deceptive AI claims—such as fake legal services and AI-driven e-commerce schemes. It highlights that the FTC enforces existing law: misusing AI tools to deceive consumers or generate fake reviews is illegal. This matters for legal professionals because it marks a sharp pivot toward proactive, AI-focused consumer protection, illustrating how liability and enforcement frameworks are evolving in the AI era. With tangible outcomes—fines, cease-and-desist orders, and court actions—this release equips lawyers with critical insights on how to counsel clients offering AI-powered services.