Huski.ai is a company that leverages AI to assist IP lawyers and brand professionals with trademark clearance, watching, and enforcement. It aims to streamline brand protection and growth.
PatSnap is a company specializing in innovation intelligence and patent analytics. Founded in 2007 and headquartered in Beijing, PatSnap offers an AI-powered platform that supports organizations across industries from ideation to commercialization. The platform analyzes patents, R&D insights, and competitive landscapes, helping innovation professionals uncover emerging trends, identify risks, and find opportunities.
IPRally is a company specializing in AI-driven patent search and analysis tools. It offers a web application that uses knowledge graphs and supervised deep learning AI to provide semantic and technical understanding of patent literature. The company aims to increase the productivity of inventors and patent professionals by offering a search tool that functions like a patent expert.
EvenUp is a venture-backed generative AI startup that focuses on ensuring injury victims receive the full value of their claims. It achieves this by using AI to analyze medical documents and case files, turning them into comprehensive demand packages for injury lawyers. EvenUp aims to provide equal access to justice in personal injury cases, regardless of a person's background, income, or access to quality representation.
Harvey is a suite of AI tools designed for legal professionals, offering solutions for drafting, research, and document analysis. Developed by experts in artificial intelligence, Harvey uses advanced natural language processing to assist lawyers in their work.
Canarie is developing a compliance platform that uses AI and ML to automate the creation, review, and revision of disclosures and policies for financial institutions.
This Harvard Journal of Law & Technology digest makes a bold case for the U.S. and its allies to shift from abstract AI policy debates to a concrete “Chips for Peace” framework that ties frontier-AI regulation, export controls, and benefit-sharing together to prevent high-stakes misuse. It outlines three strategic pillars (catastrophe prevention, equitable prosperity, and coordinated governance) for leveraging AI chip supply chains to promote global stability. Legal professionals and policymakers will find it compelling: the piece transforms chip controls into a diplomatic lever and calls for enforceable standards that reach beyond national borders. It matters because it translates geopolitical tensions into a proactive legal strategy with real-world tools; click through to explore how your firm can navigate and shape the next generation of AI governance.
This insightful analysis by Megan Bannigan, Christopher S. Ford, Samuel J. Allaman, and Abigail Liles examines a pivotal February 2025 Delaware District Court ruling in Thomson Reuters v. ROSS Intelligence, where the court granted summary judgment for Thomson Reuters on direct copyright infringement and rejected ROSS’s fair use defense. A key takeaway is that courts may treat unlicensed training of AI models on copyrighted works as non-transformative, with commercial use and market harm outweighing internal-only access arguments. Timing matters: this decision offers a strong early signal that fair use defenses in AI-focused copyright litigation may face steep judicial scrutiny. For IP counsel and AI developers, the article offers actionable clarity on how fair use factors—especially purpose, transformation, and market impact—are being applied in emerging AI cases, making it essential reading for legal professionals navigating generative AI risk.
This Harvard Journal of Law & Technology digest argues that the Federal Circuit should adopt the USPTO’s subject-matter guidance to resolve the nagging inconsistencies left by Alice in determining abstract-idea patent eligibility under 35 U.S.C. § 101. It explains how courts currently struggle to apply a unified test, highlighting fragmented judicial analyses and urging systematic classification of abstract-idea categories at Step One. By leaning on PTO guidance for “groupings of subject matter” and “practical application” tests, the piece shows how the Federal Circuit could enhance predictability in § 101 decisions. IP practitioners and patent litigators will find the argument compelling and practical: it offers a clear pathway to untangle abstract-idea ambiguity and fortify drafting and litigation strategies.
The article outlines the NYDFS’s October 16, 2024 Industry Letter, which leverages the existing 23 NYCRR Part 500 framework to guide financial institutions on managing cybersecurity risks tied to AI, including deepfake-enabled social engineering, third-party vendor risk, and the use of AI as a threat vector. It emphasizes how firms should integrate AI-specific controls, such as deepfake-resistant MFA, annual AI-risk assessments, vendor due diligence, and AI-awareness training, without introducing new regulations. Legal professionals and compliance teams will find this essential for updating governance frameworks, tightening vendor contracts, and ensuring regulatory adherence. Click through for a practical roadmap on aligning your cybersecurity programs with evolving AI-driven threats.
Yale Journal of Law & Technology’s article delves into the ethical and bias challenges in AI‑powered legal tools, dissecting how human inputs—from data selection to prompting—profoundly shape AI outputs. It spotlights the tension between efficiency gains and the risk of automated errors, offering legal professionals a roadmap to evaluate when and how to retain human oversight. This piece matters because it equips lawyers and compliance teams with actionable insights to design fair, defensible AI workflows. Dive into the full analysis to understand the mechanics driving bias and how to implement guardrails that uphold integrity and trust.
This article highlights a “landmark” November 2023 ruling by the Beijing Internet Court, which for the first time confirmed that AI-generated images (specifically those produced via Stable Diffusion with creative prompting and refinement) can qualify for copyright under Chinese law. It explains how the court analyzed key concepts like “originality” and “intellectual achievement,” crediting the prompt engineer, not the AI model, as the true author. The ruling signals China’s judicial readiness to grapple with AI-driven creativity and sets an actionable precedent for IP ownership in generative AI cases. Legal professionals will find its deep dive into authorship criteria especially relevant. Click through to explore the court’s detailed reasoning and its implications for future AI-related IP strategy.