Why in the News?
The Supreme Court termed a trial court's reliance on AI-generated fake case law in Andhra Pradesh "misconduct" and flagged it as an "institutional concern." The case involved the citation of non-existent judgments generated through AI tools, prompting the Court to warn that decisions based on fabricated precedents will attract legal consequences.
What is AI Hallucination?
- Definition: AI hallucination refers to the generation of false, fabricated, or non-existent information by generative AI systems, presented in a confident and coherent manner.
- In Legal Context: It includes creation of fake case citations, incorrect statutory references, or imaginary judicial precedents.
- Cause: Occurs because generative AI predicts text patterns probabilistically rather than retrieving verified data from authenticated legal databases.
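The mechanism above can be illustrated with a minimal toy sketch (all names, cases, and functions here are hypothetical, invented purely for illustration): a model that has learned only the surface pattern of SCC citations can compose an authentic-looking citation without ever consulting a verified database, which is exactly why a separate verification step is needed.

```python
import random

# Toy "language model": it has learned the surface *pattern* of SCC
# citations (party names, year, volume, page) but holds no database of
# real cases, so its outputs are plausible-looking, not verified.
PARTIES = ["Subramani", "Natarajan", "Sharma", "Verma"]

# Hypothetical stand-in for an authenticated legal database.
REAL_CASES = {"Sharma v. Verma (2010) 4 SCC 12"}

def generate_citation(rng: random.Random) -> str:
    # Tokens are sampled by pattern probability; REAL_CASES is never
    # consulted -- the essence of a hallucinated citation.
    a, b = rng.sample(PARTIES, 2)
    year = rng.randint(2000, 2023)
    vol = rng.randint(1, 16)
    page = rng.randint(1, 999)
    return f"{a} v. {b} ({year}) {vol} SCC {page}"

def is_verified(citation: str) -> bool:
    # The independent check courts now mandate: confirm the citation
    # against an authenticated source before relying on it.
    return citation in REAL_CASES

cite = generate_citation(random.Random(42))
print(cite, "| verified:", is_verified(cite))
```

The fluent format is what makes such output dangerous: it passes a visual plausibility check while failing the database check.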
Role of AI in Judicial Process
- Research Assistance: Supports case-law searches, judgment summarisation, and drafting. Example: The Supreme Court’s AI tool SUPACE (Supreme Court Portal for Assistance in Court’s Efficiency) assists judges by compiling relevant precedents and legal materials for faster research.
- Administrative Efficiency: Facilitates transcription, translation, and document management under the e-Courts Project. Example: The Supreme Court’s SUVAS (Supreme Court Vidhik Anuvaad Software) uses AI-based machine translation to translate judgments into regional languages to enhance accessibility.
- Access to Justice: Expands digital availability of court records and improves procedural transparency. Example: Under the e-Courts Mission Mode Project (Phase III), virtual courts and online filing systems use technology-enabled processes to reduce pendency and improve citizen access.
- Risk Factor and Verification Requirement: Mandates human oversight to prevent reliance on fabricated outputs. Example: The recent Supreme Court observation in the Andhra Pradesh trial court matter highlighted that AI-generated fake citations, if unverified, can amount to misconduct and undermine judicial credibility.
How does AI ‘hallucination’ challenge the integrity of judicial decision-making?
- Predictive Text Model: Generative AI tools such as ChatGPT operate on probabilistic language prediction rather than verified legal databases, leading to fabricated citations.
- Fabricated Case Law: In the Vijayawada trial court case, an AI-generated judgment cited “Subramani v. M. Natarajan (2013) 14 SCC 95,” which did not exist.
- Linguistic Fluency over Accuracy: AI tools prioritise coherent language construction, not factual validation.
- Judicial Consequence: The Supreme Court observed that reliance on fake judgments amounts to “misconduct” and entails legal consequences.
Why did the Supreme Court treat this incident as an ‘institutional concern’ rather than an isolated lapse?
- Systemic Occurrence: The Court noted similar instances of AI-generated “non-existent” judgments across jurisdictions.
- Supreme Court Dismissal (Feb 13, 2026): A Special Leave Petition was dismissed after the petitioner cited non-existent judgments.
- Delhi High Court (Sept 2025): Petition withdrawn after opposing counsel pointed out fabricated precedents.
- Bombay High Court (Jan 2026): Imposed ₹50,000 cost for citing a fake case; noted AI-generated drafting markers such as bullet formats and green-box highlights.
- Judicial Time Wastage: Courts described such reliance as “dumping” unverified material, resulting in waste of judicial time.
What distinguishes ‘error in good faith’ from judicial misconduct in this context?
- High Court Approach: Justice Ravi Nath Tilhari accepted the trial judge’s explanation that AI was used in good faith; refused to set aside the order solely due to erroneous citations.
- Supreme Court’s Position: Held that reliance on fake judgments is not merely an error but misconduct affecting adjudication integrity.
- Legal Threshold: The apex court emphasised accountability where fabricated precedents influence judicial reasoning.
- Institutional Discipline: The Court signalled that judicial officers must independently verify sources before relying on AI outputs.
What regulatory and policy responses have emerged within the judiciary?
- White Paper (Nov 2025): Supreme Court released “Artificial Intelligence and Judiciary,” identifying “fabrication of cases and hallucination” as primary risks.
- Risk Identification: AI may hallucinate judgments, citations, and legislative references that do not exist.
- Ethics Committees Proposal: Recommended establishing AI ethics committees within courts.
- Mandatory Verification: Directed that information obtained through AI tools must be independently verified.
- Kerala High Court (July 2025): Issued first formal AI policy permitting administrative use but mandating meticulous verification of legal citations; warned of disciplinary action.
How does this development reflect the broader tension between technological adoption and constitutional accountability?
- Digital Transformation of Courts: Judiciary increasingly integrates AI for translation, transcription, and research assistance.
- Adjudicatory Legitimacy: Judicial authority derives from constitutional fidelity and precedential accuracy.
- Professional Responsibility: Lawyers and judges remain accountable for submissions irrespective of technological tools used.
- Rule of Law Implication: Fabricated precedents undermine stare decisis and the doctrine of binding precedent under Article 141.
Conclusion
The Supreme Court’s observations underline that technological integration in the judiciary must operate within the framework of constitutional discipline and professional accountability. While AI enhances efficiency, access, and research capacity, it cannot replace judicial reasoning or due diligence. The episode reinforces that the rule of law depends not merely on digital advancement but on verified precedent, ethical responsibility, and institutional integrity.
PYQ Relevance
[UPSC 2023] Introduce the concept of Artificial Intelligence (AI). How does AI help clinical diagnosis? Do you perceive any threat to the privacy of the individual in the use of AI in healthcare?
Linkage: The question links AI’s utility with ethical and regulatory concerns, similar to judicial AI use where efficiency must be balanced with accountability and safeguards. The issue of AI hallucination in courts reflects the same tension between technological assistance and risks to institutional integrity.