Huski.ai is a company that leverages AI to assist IP lawyers and brand professionals with trademark clearance, watching, and enforcement. It aims to streamline brand protection and growth.
PatSnap is a company specializing in innovation intelligence and patent analytics. Founded in 2007 and headquartered in Beijing, it offers an AI-powered platform that supports various industries through the journey from ideation to commercialization. The platform analyzes patents, R&D insights, and competitive landscapes, helping innovation professionals uncover emerging trends, identify risks, and find opportunities.
IPRally is a company specializing in AI-driven patent search and analysis tools. It offers a web application that uses knowledge graphs and supervised deep learning AI to provide semantic and technical understanding of patent literature. The company aims to increase the productivity of inventors and patent professionals by offering a search tool that functions like a patent expert.
EvenUp is a venture-backed generative AI startup that focuses on ensuring injury victims receive the full value of their claims. It achieves this by using AI to analyze medical documents and case files, turning them into comprehensive demand packages for injury lawyers. EvenUp aims to provide equal access to justice in personal injury cases, regardless of a person's background, income, or access to quality representation.
Harvey is a suite of AI tools designed for legal professionals, offering solutions for drafting, research, and document analysis. Developed by experts in artificial intelligence, it uses advanced natural language processing to assist lawyers in their work.
Canarie is developing a compliance platform that uses AI and ML to automate the creation, review, and revision of disclosures and policies for financial institutions.
This Perspective by Ben Chester Cheong (Singapore University of Social Sciences & Cambridge) offers a comprehensive legal–ethical review of transparency and accountability challenges in AI systems that govern human wellbeing. It structures the discussion into four pillars—technical explainability methods, regulatory frameworks, ethical safeguards, and multi-stakeholder collaboration—highlighting how each plays a vital role in ensuring trust and societal resilience. Legal professionals will appreciate its actionable framework bridging technology, ethics, and governance, a timely resource amid emerging regulations such as the GDPR's "right to explanation" and the EU AI Act's transparency mandates. By offering strategic clarity and policy cohesion, the article equips lawyers, compliance leaders, and policymakers with tools to embed transparency and accountability into AI systems that shape lives, making it a must-read for anyone advising on responsible AI deployment.
This essay by Margot E. Kaminski and Meg Leta Jones explores how current legal frameworks actively construct AI-generated speech, rather than being passively disrupted by it. It introduces the “legal construction of technology” method—analyzing how laws like the First Amendment, content moderation, risk regulation, and consumer protection actively interpret and shape AI speech. This analysis matters to legal professionals because it reveals that existing institutions and norms already provide structured pathways for meaningful oversight, shifting the conversation from reactive problem-solving to proactive values-based policy design. By demonstrating that law and AI co-evolve through these intentional constructions, this piece empowers lawyers and policymakers to craft more effective, principled governance—prompting deeper engagement with the field.
This article by Graham H. Ryan analyzes how generative AI challenges the legal immunity conferred by Section 230 of the Communications Decency Act—and why that protection may crumble under new judicial scrutiny. Ryan argues that generative AI systems “create or develop content” and thus likely fall outside Section 230’s existing scope, exposing providers to increased liability for design decisions and algorithmic contributions. It matters for legal professionals because emerging case law may redefine liability standards—from co-authoring content to design-based claims—signaling a pivotal shift in AI governance and internet law that practitioners need to monitor closely. By framing generative AI as a catalyst for reevaluating the legal foundations of internet speech, the article urges lawyers to proactively reassess risk strategies and regulatory compliance in this evolving landscape.
This insightful analysis explores how China's August 15, 2023, regulations on generative AI reflect a strategic choice to slow AI development in the interest of social and political control. Its key finding is that Beijing's cautious regulatory approach contrasts sharply with the innovation-first strategies of the U.S. and EU, granting other jurisdictions essential breathing room to develop responsible AI policies. Legal professionals will find this article timely and compelling, as it offers practical insight into how geopolitical AI maneuvering reshapes cross-border legal strategy, compliance, and tech governance. By positioning China's policy as a global pivot point, the piece equips lawyers and policymakers with a nuanced understanding of how AI regulation is being shaped on the international stage, prompting further investigation and dialogue.
This analysis by Matt Blaszczyk, Geoffrey McGovern, and Karlyn D. Stanley explores how U.S. copyright law grapples with AI-generated content, addressing whether AI-assisted works qualify for protection and how training datasets are treated under both U.S. and EU doctrines. It highlights a key insight: U.S. law requires human “authorial contribution” for copyrightability, while the EU allows rights holders to challenge commercial AI training—underscoring a growing legal divide. Timely hearings in Congress and updates from the U.S. Copyright Office make this discussion urgent for legal professionals managing copyright risks in generative AI projects. The analysis empowers practitioners with a clear roadmap for navigating emerging policy debates, licensing strategies, and litigation landscapes around AI-authored content—making it essential reading for IP counsel advising on innovation-driven initiatives.
This expert analysis from Dentons partners Jennifer Cass, Anna Copeman, Sam Caunt, and David Wagget examines the unresolved IP challenges arising from generative AI in 2024 and the legal “cliffhangers” heading into 2025. It highlights key issues like copyright infringement during AI training, ownership of AI-generated works and inventions, and emerging litigation—such as Getty Images v. Stability AI. Legal professionals will find value in its forward-looking take on 2025 reforms, including licensing trends, contractual risk strategies, and pending court rulings. Written with practical insight, this piece equips lawyers with proactive tools to guide clients through rapidly evolving AI‑IP terrain.