Paxton is an innovative legal technology firm transforming the legal landscape. Our vision is to equip legal professionals with an AI assistant that supercharges efficiency, enhances quality, and enables extraordinary results.
Developer of a document review platform designed to help law firms automate the review process and surface relevant evidence. The company's platform uses artificial intelligence to find evidence supporting clients' cases, instantly view event timelines, auto-generate tags, and auto-categorize documents, helping lawyers unearth critical evidence and build comprehensive timelines.
DocLens.ai is a Software as a Service (SaaS) platform that leverages artificial intelligence (AI) and machine learning (ML) to assist insurance professionals in managing legal risks associated with liability claims and complex document reviews. The platform is designed to process both structured and unstructured data, including various types of documents, to extract critical information and provide actionable insights.
Wexler establishes the facts in any contentious matter, from an internal investigation to international litigation to an employee grievance. Disputes of any kind rely on a deep understanding of the facts. With Wexler, legal, HR, compliance, forensic accounting, and tax teams can quickly understand the facts in any matter, reducing doubt, saving critical time, and increasing ROI through more successful outcomes and fewer written-off costs.
DeepJudge is the core AI platform for legal professionals. Powered by world-class enterprise search that serves up immediate access to all of the institutional knowledge in your firm, DeepJudge enables you to build entire AI applications, encapsulate multi-step workflows, and implement LLM agents.
Alexi is the premier AI-powered litigation platform, providing legal teams with high-quality research memos, pinpointing crucial legal issues and arguments, and automating routine litigation tasks.
The new GenAI Profile reflects NIST's recommendations for implementing the risk management principles of the AI RMF specifically with respect to generative AI. This guidance is intended to assist organizations with implementing comprehensive risk management techniques for specific known risks that are unique to or exacerbated by the deployment and use of generative AI applications and systems.
Image-generating technology is accelerating quickly, making it much more likely that you will be seeing "digital replicas" (sometimes referred to as "deepfakes") of celebrities and non-celebrities alike across film, television, documentaries, marketing, advertising, and election materials. Meanwhile, legislators are advocating for protections against the exploitation of name, image, and likeness while attempting to balance the First Amendment rights creatives enjoy.
Aescape offers AI-powered robotic massages via its “Aertable,” which uses real-time feedback and a body-scan system to deliver personalized experiences. With features like “Aerpoints” simulating therapist touch and “Aerwear” enhancing accuracy, Aescape addresses massage-industry challenges such as inconsistency and therapist shortages. While expanding rapidly, it raises legal issues including liability, privacy, licensing, regulation, and IP concerns.
Large language models rely on vast internet-scraped data, raising legal concerns, especially around intellectual property. Many U.S. lawsuits allege IP violations tied to data scraping. An OECD report, Intellectual Property Issues in AI Trained on Scraped Data, examines these challenges and offers guidance for policymakers on addressing legal and policy concerns in AI training.
The integration of artificial intelligence (AI) tools in healthcare is revolutionizing the industry, bringing efficiencies to the practice of medicine and benefits to patients. However, negotiating agreements for third-party AI tools requires a nuanced understanding of each tool's application, implementation, risks, and contractual pressure points.
Parents of two Texas children have sued Character Technologies, claiming its chatbot, Character.AI, exposed their kids (ages 17 and 11) to self-harm, violence, and sexual content. Filed by the Social Media Victims Law Center and Tech Justice Law Project, the suit seeks to shut down the platform until safety issues are addressed. It also names the company’s founders, Google, and Alphabet Inc. as defendants.