XLSCOUT is a SOC 2 Type II-compliant integrated innovation and patent monetization platform at the forefront of the global innovation and IP industry. It harnesses advanced AI technologies such as Large Language Models (LLMs) and generative AI for idea validation, ideation optimization, high-value patent creation, and innovation monetization.
MarqVision is an AI-powered platform that helps brands protect themselves from online counterfeits, unauthorized sales, and other forms of brand infringement across various online platforms.
ScaleIP, formerly known as LicenseLead, uses AI and IP transaction data to help businesses identify and connect with potential partners for licensing, selling, or collaborating on patents, streamlining the search for suitable counterparties in IP-related deals. By surfacing and engaging the most likely IP partners, it helps IP teams save time, generate revenue, and make informed patent decisions.
Black Hills AI provides automated intellectual property legal support services from its offices in the US, including IP docketing, paralegal support, proofreading, analytics, and annuity management.
Questel is a company specializing in intellectual property (IP) management and innovation. It provides software and services to help businesses manage their IP assets, including patents, trademarks, designs, and copyrights.
Tradespace works with leading innovators to generate, manage, and commercialize their IP portfolios. It describes itself as the only platform supporting organizations across the entire innovation cycle, including disclosure collection and evaluation, IP management, analytics and scouting, and commercialization.
This in-depth analysis by Chad A. Rutkowski of Baker & Hostetler unpacks the February 2025 Thomson Reuters v. Ross Intelligence decision, in which the Delaware court granted summary judgment for Thomson Reuters, holding that Ross’s use of Westlaw headnotes to train its AI tool did not qualify as fair use. The court made clear that the training was commercial and non-transformative and created a market substitute, emphasizing that even “intermediate” copying without substantial transformation can fail fair use protections. This ruling matters deeply to legal professionals: it sets a critical precedent that AI developers must carefully secure proper licenses for copyrighted training data, with fair use defenses facing steep judicial scrutiny. With broader implications for generative AI platforms like OpenAI or Meta, this piece is essential reading for IP counsel advising on AI-data compliance and litigation strategy—making it a must-click for forward-thinking practitioners.
This insightful analysis tackles a rarely addressed issue—whether AI trainers can claim copyright ownership over the output they help generate. Rutkowski dives into scenarios like image-generation with Midjourney, questioning if human prompts and curatorial choices elevate trainers into co-authorship roles. This matters for legal professionals advising AI developers and users, as it introduces new contours to authorship, licensing, and ownership arguments that could reshape IP strategies. By spotlighting the nuanced interplay between human input and AI-generated results, the piece urges counsel to proactively clarify rights, licensing, and attribution in training workflows—making it a vital read for forward-thinking IP practitioners.
This in-depth analysis by Eric Dinallo, Avi Gesser, Matt Kelly, Samuel J. Allaman, Melyssa Eigen, Ned Terrace, Stephanie Thomas, and Mengyi Xu examines Colorado’s proposed amendment to extend its AI governance and risk-management regulations—originally for life insurers—to auto and health insurance providers. Highlighting key updates like bias evaluations, board-level oversight, clear consumer explanation of adverse AI-driven decisions, and mandated human oversight in healthcare determinations, the authors draw attention to near-term compliance deadlines in 2025. It underscores a shift, showing how state regulators are preemptively integrating AI into sector-specific governance, offering legal and compliance teams concrete preparation steps. This matters to legal professionals because it signals growing state-level enforcement of AI accountability in insurance, helping counsel advise insurers on bridging internal policies with emerging regulatory frameworks.
This analysis—authored by Megan K. Bannigan, Christopher S. Ford, Samuel J. Allaman, and Abigail Liles—breaks down a pivotal February 2025 ruling in Tremblay v. OpenAI, where a U.S. federal court ordered OpenAI to produce its full training dataset for GPT‑4 in a copyright infringement case. The ruling underscores that courts are now treating training data as central to proving direct AI‑related copyright claims, even amidst the tension between discovery obligations and trade‑secret protection. For legal professionals, this marks a significant escalation in e‑discovery strategy: practitioners must now advise AI developers on balancing transparency, litigation readiness, and data security under protective orders. By spotlighting emerging standards for dataset disclosure, the article offers invaluable insight for litigators, in‑house counsel, and compliance teams managing AI‑driven legal risk.
This detailed analysis from Debevoise & Plimpton explores how the European Union’s AI Act intersects with insurance industry practices, offering a risk-tiered framework that directly impacts underwriting, fraud detection, and customer service AI tools. The article contrasts the EU's prescriptive regime with the UK's more flexible, principles-based oversight, helping legal and compliance teams understand diverging global regulatory landscapes. A standout insight is that many insurer AI use cases may fall outside the AI Act’s strictest categories—but existing frameworks like Solvency II, DORA, and the IDD already impose significant governance and transparency expectations. With August 2026 compliance deadlines looming, this piece provides insurers and their counsel with a practical roadmap to prepare for cross-jurisdictional AI oversight.
This analysis from Annie Dulka explores how AI applications in international human rights law are reshaping legal frameworks—from refugee protection and due process to surveillance governance. It outlines both innovative benefits (e.g., enhanced monitoring, rapid documentation) and legal risks (e.g., bias in asylum decisions, privacy violations), arguing that robust oversight and principled deployment are essential to align AI tools with international human rights norms. This matters significantly for legal professionals navigating cross-border AI use, as it offers a practical roadmap for integrating AI ethics into treaty interpretation, case law, and compliance mechanisms. Engaging and authoritative, the piece encourages lawyers and policymakers to proactively shape AI deployment in human rights contexts—making it a compelling entry-point for those advising on global AI governance.