ARTIFICIAL INTELLIGENCE, LEGAL PERSONHOOD, AND DETERMINATION OF CRIMINAL LIABILITY
AUTHOR – MS. SETIKA PRIYAM* & DR. KUNVAR DUSHYANT SINGH**
* STUDENT AT AMITY LAW SCHOOL, AUUP, LUCKNOW
** ASSISTANT PROFESSOR AT AMITY LAW SCHOOL, AUUP, LUCKNOW
BEST CITATION – MS. SETIKA PRIYAM & DR. KUNVAR DUSHYANT SINGH, ARTIFICIAL INTELLIGENCE, LEGAL PERSONHOOD, AND DETERMINATION OF CRIMINAL LIABILITY, INDIAN JOURNAL OF LEGAL REVIEW (IJLR), 5 (7) OF 2025, PG. 212-222, APIS – 3920 – 0001 & ISSN – 2583-2344
ABSTRACT
The broad adoption of artificial intelligence (AI) across vital domains, ranging from autonomous vehicles and financial markets to healthcare diagnostics and legal analytics, has exposed significant gaps in our legal systems when AI-driven errors or malfunctions cause harm. Autonomous systems often involve multiple stakeholders (hardware suppliers, software developers, sensor manufacturers, and corporate overseers), making it difficult to pinpoint who is responsible for a system’s failure. The 2018 Uber autonomous‑vehicle crash in Tempe, Arizona, where the AI’s perception module repeatedly misclassified a pedestrian and the emergency braking function had been disabled, underscores this challenge: with safety overrides turned off and state oversight minimal, liability became entangled among engineers, operators, and corporate policy, not the machine alone.
Traditional criminal law doctrines rest on actus reus (the guilty act) and mens rea (the guilty mind), both premised on human agency and intent. AI entities, however, can execute complex decision‑making without consciousness or moral awareness, creating a “responsibility gap” under current frameworks. To bridge this gap, scholars like Gabriel Hallevy have proposed three liability models—perpetration‑via‑another (holding programmers or users accountable), the natural‑probable‑consequence model (liability for foreseeable harms), and direct liability (attributing responsibility to AI itself if it meets legal thresholds for actus reus and an analogue of mens rea). Each model offers insight but struggles with AI’s semi‑autonomous nature and opacity.
This paper argues against prematurely conferring legal personhood on AI, an approach that risks absolving human actors and diluting accountability. Instead, it advocates a human‑centric policy framework that combines clear oversight duties, mandated explainability measures, and calibrated negligence or strict‑liability standards for high‑risk AI applications. Such reforms are especially urgent in jurisdictions like India, where AI governance remains nascent. By anchoring liability in human oversight and regulatory clarity rather than in machines themselves, we can ensure that accountability evolves in step with AI’s growing capabilities, safeguarding both innovation and public safety.
Keywords: Artificial Intelligence, Criminal Liability, Legal Personhood, Actus Reus, Mens Rea, Vicarious Liability, AI Regulation