NLPatent is an industry-leading AI-based patent search and analytics platform trusted by Fortune 500 companies, Am Law 100 firms, and research universities around the world. The platform takes an AI-first approach to patent search; it is built on a proprietary large language model trained on patent data to understand the language of patents and innovation.
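In practice, "AI-first" patent search of this kind generally means retrieving documents by semantic similarity rather than keyword match. The sketch below illustrates that general idea with an off-the-shelf embedding model; it is a minimal illustration, not NLPatent's proprietary model, and the model name, abstracts, and query are assumptions made for the example.

```python
# Minimal sketch of embedding-based semantic patent search.
# Illustrative only: an off-the-shelf model stands in for a patent-tuned LLM;
# the abstracts and query are invented for the example.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")

abstracts = [
    "A lithium-ion battery cell with a silicon-dominant anode.",
    "A method for wireless charging of implantable medical devices.",
    "An adaptive cruise control system using radar and camera fusion.",
]
query = "improving energy density in rechargeable batteries"

# Embed the corpus and the query, then rank abstracts by cosine similarity.
corpus_emb = model.encode(abstracts, convert_to_tensor=True)
query_emb = model.encode(query, convert_to_tensor=True)
scores = util.cos_sim(query_emb, corpus_emb)[0]

for idx in scores.argsort(descending=True):
    i = int(idx)
    print(f"{scores[i].item():.3f}  {abstracts[i]}")
```

Note that the query shares almost no vocabulary with the best-matching abstract; ranking by embedding similarity rather than keyword overlap is what lets the search surface it anyway.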
PQAI (Patent Quality through Artificial Intelligence) is a free, open-source, natural-language patent search platform developed by AT&T and the Georgia Intellectual Property Alliance. It is designed as a collaborative initiative to build a shared AI-based tool for prior art searching.
Solve Intelligence is an AI-powered platform for intellectual property legal professionals that streamlines the patenting process. Founded in 2023 and based in San Francisco, the company develops AI tools specifically for patent attorneys, with a focus on user-centric design and practical application.
Amplified AI is an intellectual property (IP) technology company offering AI-powered search and collaboration tools. It helps researchers and innovators find, document, and share technical intelligence within their teams by organizing and curating global patent and scientific information.
Ambercite AI is a patent search tool that uses artificial intelligence (AI) and network analytics to identify patents similar to a given set of starting patents. Unlike traditional search methods that rely on keywords and patent class codes, it analyzes citation patterns, patent text, and metadata to surface relevant patents and reduce false positives.
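To make the citation-based idea concrete, here is a toy sketch that ranks candidate patents by how many cited documents they share with a starting patent. It is only an illustration of the general network approach, not Ambercite's actual algorithm, and the patent numbers and citation sets are invented for the example.

```python
# Illustrative only: a toy citation-overlap similarity, not Ambercite's algorithm.
# Patent numbers and citation sets below are invented for the example.
from collections import Counter

# Map each patent to the set of documents it cites.
citations = {
    "US1000001": {"US900001", "US900002", "US900003"},
    "US1000002": {"US900002", "US900004"},
    "US1000003": {"US900001", "US900002", "US900005"},
    "US1000004": {"US900006"},
}

def similar_patents(starting, citations):
    """Score candidates by the number of citations shared with the starting set."""
    start_refs = set().union(*(citations[p] for p in starting))
    scores = Counter()
    for candidate, refs in citations.items():
        if candidate in starting:
            continue
        scores[candidate] = len(refs & start_refs)
    return scores.most_common()

print(similar_patents({"US1000001"}, citations))
# [('US1000003', 2), ('US1000002', 1), ('US1000004', 0)]
```

Real systems weight shared citations and combine them with text and metadata signals, but the core intuition is the same: patents that cite the same prior art tend to be about the same thing, regardless of the words they use.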
PatentPal is an AI-powered platform designed to streamline the patent drafting process for legal professionals. It utilizes generative AI to automate the creation of patent applications, including generating descriptions, figures, and supporting documents from a set of claims. PatentPal aims to save time for patent attorneys and agents, allowing them to focus on higher-value aspects of their work. It can export drafts into formats like Word, Visio, or PowerPoint.
Cardozo Law Review's empirical research demonstrates how AI hiring algorithms trained on predominantly male datasets systematically replicate gender bias, as seen in Amazon's algorithm that downgraded female candidates. The analysis reveals a fundamental measurement challenge in employment AI that does not arise in medical AI: researchers cannot easily determine whether rejected female candidates would have outperformed the males who were hired. This academic study exposes the technical limitations of bias auditing in hiring contexts and calls for structural reforms to prevent AI from codifying historical workplace discrimination.
A comprehensive analysis of 13 global AI laws reveals unprecedented regulatory activity: U.S. states introduced more than 400 AI bills in 2024, six times as many as in 2023, while the EU AI Act creates binding requirements for high-risk hiring systems. The research highlights critical compliance challenges as NYC's bias audit requirements, Colorado's impact assessments, and India's anti-discrimination mandates create a complex patchwork of overlapping obligations. HR professionals must navigate ADA accommodations, Title VII compliance, and emerging state-specific AI regulations while ensuring algorithmic fairness across diverse jurisdictions.
Oxford Journal's research reveals how AI developers have become increasingly secretive about training datasets as copyright litigation intensifies, prompting global calls for mandatory transparency requirements. The analysis examines the EU AI Act's groundbreaking training data disclosure mandates and G7 principles requiring transparency to protect intellectual property rights. This scholarly assessment demonstrates how transparency obligations could enable rightsholder enforcement while balancing innovation needs, offering a potential regulatory solution to the copyright-AI training data conflict.
A civil rights firm's analysis exposes how AI bias in hiring systematically discriminates against marginalized groups: nearly 80% of employers now use AI recruitment tools despite documented cases of gender and racial discrimination, such as Amazon's scrapped recruiting engine. The EEOC's new initiative to combat algorithmic discrimination reflects mounting legal challenges as biased datasets perpetuate inequality across healthcare, employment, and lending. This practitioner perspective emphasizes the urgent need for human oversight and ethical AI frameworks to prevent civil rights violations in an increasingly automated hiring landscape.
USC's legal analysis explores landmark AI copyright litigation, including Authors Guild v. OpenAI and NYT v. Microsoft, in which publishers claim that AI training violates copyright through the unauthorized use of millions of articles. The piece contrasts China's progressive stance recognizing copyright in AI-generated content with the unresolved fair use debates in the U.S., highlighting how courts must balance AI innovation against creator rights. As proposed federal legislation like the Generative AI Copyright Disclosure Act advances, this analysis illuminates the critical legal battles shaping AI's future in creative industries.
The EU AI Act, spanning 180 recitals and 113 articles, is now enforceable law, imposing maximum penalties of €35 million or 7% of worldwide annual turnover for non-compliance. The regulation's phased implementation began with the ban on prohibited AI practices in February 2025, followed by transparency requirements for general-purpose AI models in August 2025, with most remaining provisions applying by August 2026. This comprehensive framework establishes the legal foundation for AI governance across all 27 EU member states, creating immediate compliance obligations for any organization deploying AI systems that impact EU markets.