PSA: Claude confidently cited a revenue ruling that does not exist
was doing research on hobby loss safe harbors (§183) and asked Claude to find relevant revenue rulings. it gave me:
- Rev. Rul. 2004-38 (doesn't exist)
- Rev. Rul. 2019-14 (doesn't exist)
- Rev. Rul. 77-320 (doesn't exist)
all three came with detailed summaries of what they supposedly held. complete fabrications. i spent 20 minutes searching for them before realizing they were hallucinated.
the real authority here is Treas. Reg. §1.183-2(b) with the 9-factor test, plus cases like Nickerson v. Commissioner and Dreicer v. Commissioner.
the lesson: NEVER cite an AI-sourced legal reference without independently verifying it exists. this isn't a "sometimes it's wrong" situation — it fabricates citations with alarming frequency.
posting this because someone is going to put a hallucinated citation in a client memo and it's going to be embarrassing at best, malpractice at worst.
4 replies
this is the #1 risk of AI in tax practice. not wrong calculations — those you can check. fabricated LEGAL AUTHORITY.
if you put Rev. Rul. 2004-38 in a client letter and the IRS agent looks it up and it doesn't exist, your credibility is destroyed. and depending on the stakes, it could be a Circular 230 issue.
my rule: any citation from AI goes through Checkpoint or Intelliconnect before it touches a work product. no exceptions.
i had the same experience with UK case law. asked Claude for HMRC tribunal decisions on IR35 and it fabricated three cases with full party names, dates, and holdings. none of them existed.
the fabricated cases were even internally consistent — the "holdings" aligned with real IR35 principles, which made them MORE dangerous because they sounded right.
AI is excellent at understanding legal PRINCIPLES but terrible at citing specific legal SOURCES. use it for the former, never trust it for the latter.
re: yuki's point — i now have a standard prompt prefix for any tax research: "cite only published IRS guidance, Treasury regulations, or named Tax Court cases. do NOT fabricate citations. if you're unsure whether a citation exists, say so."
it helps. it doesn't eliminate the problem. i still verify everything. but at least the AI starts hedging instead of confidently lying.
would be interesting to build a "citation verification" skill that cross-references any citation the AI produces against a known database. that would be genuinely valuable.
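a minimal sketch of what that could look like: extract citation-shaped strings from AI output with a regex and check each one against a practitioner-verified allowlist. everything here is hypothetical — the function names, the regex coverage (only revenue rulings and treasury regs), and the allowlist contents are placeholders, not a real citation database.

```python
import re

# Matches "Rev. Rul. YYYY-NN" / "Rev. Rul. NN-NNN" style revenue rulings
# and "Treas. Reg. §1.183-2(b)" style regulation cites. Illustrative only;
# a real skill would need patterns for cases, notices, PLRs, etc.
CITATION_PATTERN = re.compile(
    r"Rev\. Rul\. \d{2,4}-\d+"
    r"|Treas\. Reg\. §[\d.]+(?:-\d+)?(?:\([a-z0-9]+\))*"
)

def extract_citations(text: str) -> list[str]:
    """Pull citation-shaped strings out of AI-generated text."""
    return CITATION_PATTERN.findall(text)

def verify_citations(text: str, verified: set[str]) -> dict[str, bool]:
    """Map each extracted citation to whether it appears in the allowlist."""
    return {c: c in verified for c in extract_citations(text)}

# placeholder allowlist — in practice this would be a curated database
verified_db = {"Treas. Reg. §1.183-2(b)"}
draft = "See Rev. Rul. 2004-38 and Treas. Reg. §1.183-2(b)."
print(verify_citations(draft, verified_db))
# {'Rev. Rul. 2004-38': False, 'Treas. Reg. §1.183-2(b)': True}
```

anything that comes back False gets flagged for manual lookup before it goes near a work product. it's an allowlist, not a truth oracle — it can't confirm a citation says what the AI claims, only that the string exists in your verified set.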
this is exactly why the skills matter. when we write a skill, every citation is verified by an actual practitioner against the primary source. the AI can't hallucinate a citation that's hardcoded in the rules.
that said — we need MORE skills covering more jurisdictions. the AI only follows our rules when we've written them. for anything not covered by a skill, you're back to hallucination territory.