This insightful analysis by Megan Bannigan, Christopher S. Ford, Samuel J. Allaman, and Abigail Liles examines a pivotal February 2025 Delaware District Court ruling in Thomson Reuters v. ROSS Intelligence, where the court granted summary judgment for Thomson Reuters on direct copyright infringement and rejected ROSS’s fair use defense. A key takeaway is that courts may treat unlicensed training of AI models on copyrighted works as non-transformative, with commercial use and market harm outweighing internal-only access arguments. Timing matters: this decision offers a strong early signal that fair use defenses in AI-focused copyright litigation may face steep judicial scrutiny. For IP counsel and AI developers, the article offers actionable clarity on how fair use factors—especially purpose, transformation, and market impact—are being applied in emerging AI cases, making it essential reading for legal professionals navigating generative AI risk.
The article outlines the NYDFS’s October 16, 2024 Industry Letter, which leverages the existing 23 NYCRR Part 500 framework to guide financial institutions on managing cybersecurity risks tied to AI—including deepfake-enabled social engineering, third-party vendor risk, and AI as a threat vector. It emphasizes how firms should integrate AI-specific controls—such as deepfake-resistant MFA, annual AI‑risk assessments, vendor due diligence, and AI‑awareness training—without introducing new regulations. Legal professionals and compliance teams will find this essential for updating governance frameworks, tightening vendor contracts, and ensuring regulatory adherence. Click through for a practical roadmap on aligning your cybersecurity programs with evolving AI‑driven threats.
Yale Journal of Law & Technology’s article delves into the ethical and bias challenges in AI‑powered legal tools, dissecting how human inputs—from data selection to prompting—profoundly shape AI outputs. It spotlights the tension between efficiency gains and the risk of automated errors, offering legal professionals a roadmap to evaluate when and how to retain human oversight. This piece matters because it equips lawyers and compliance teams with actionable insights to design fair, defensible AI workflows. Dive into the full analysis to understand the mechanics driving bias and how to implement guardrails that uphold integrity and trust.
This article highlights a “landmark” November 2023 ruling by the Beijing Internet Court, which for the first time confirmed that AI‑generated images—specifically those produced via Stable Diffusion with creative prompting and refinement—can qualify for copyright under Chinese law. It explains how the court analyzed key concepts like “originality” and “intellectual achievement,” crediting the prompt engineer—not the AI model—as the true author. The ruling signals China’s judicial readiness to grapple with AI‑driven creativity and sets an actionable precedent for IP ownership in generative AI cases. Legal professionals will find its deep dive into authorship criteria especially relevant. Click through to explore the court’s detailed reasoning and its implications for future AI‑related IP strategy.
This in-depth analysis by Chad A. Rutkowski of Baker & Hostetler unpacks the February 2025 Thomson Reuters v. Ross Intelligence decision, where the Delaware court granted summary judgment for Thomson Reuters—holding that Ross’s use of Westlaw headnotes to train its AI tool did not qualify as fair use. The court made clear that the training was commercial, non-transformative, and created a market substitute, emphasizing that even “intermediate” copying without substantial transformation can fail fair use protections. This ruling matters deeply to legal professionals: it sets a critical precedent that AI developers must secure proper licenses for copyrighted training data, with fair use defenses facing steep judicial scrutiny. With broader implications for generative AI developers such as OpenAI and Meta, this piece is essential reading for IP counsel advising on AI-data compliance and litigation strategy—making it a must-click for forward-thinking practitioners.
This insightful analysis tackles a rarely addressed issue—whether AI trainers can claim copyright ownership over the output they help generate. Rutkowski dives into scenarios like image generation with Midjourney, questioning whether human prompts and curatorial choices elevate trainers into co-authorship roles. This matters for legal professionals advising AI developers and users, as it introduces new contours to authorship, licensing, and ownership arguments that could reshape IP strategies. By spotlighting the nuanced interplay between human input and AI-generated results, the piece urges counsel to proactively clarify rights, licensing, and attribution in training workflows—making it a vital read for forward-thinking IP practitioners.