training junior staff on AI tools — what approach is working for your firm?
we hired 2 grad accountants this year and they both arrived already using ChatGPT for everything. great enthusiasm, terrible judgment about when to trust the output.
example: one of them used ChatGPT to draft a BAS lodgement note and it confidently stated the GST registration turnover threshold is $75,000. correct! but it cited it for a client who's voluntarily registered — the threshold is irrelevant once you're registered. the grad didn't catch it because the number was "right."
so i'm building an internal training program. current draft:
- week 1-2: use AI for research only, no client-facing output. learn what it's good at (finding references, explaining concepts) vs bad at (current-year numbers, judgment calls)
- week 3-4: use AI to generate draft working papers, but must annotate every AI-generated number with the source they verified it against
- month 2+: full AI-assisted workflow but with mandatory QA checklist (see anna's thread)
the hard part: teaching them that AI makes them FASTER at work they already understand, not a shortcut past understanding. if you can't spot the error, the tool is dangerous.
how are other firms handling this? especially interested in firms with more than 5 people.
4 replies
we're a 12-person firm. our approach:
- new hires do their first 10 returns FULLY MANUAL. no AI at all. they need to understand the mechanics before they can evaluate AI output
- after that, they can use AI but must highlight every cell that came from an agent in yellow. partner reviews the yellow cells specifically
- after 50 returns: full discretion, but QA checklist still required
the "yellow cell" rule sounds tedious but it's been incredibly effective. makes them conscious of what's AI-generated vs their own work.
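for firms that prepare working papers in spreadsheets, the yellow-cell rule can also be tracked as data rather than just formatting. a minimal sketch in Python (the `Figure` record and `cells_for_partner_review` helper are hypothetical, not anyone's actual tooling): tag each figure with its source, and surface the AI-generated ones — including any missing a verification note — for partner review.

```python
# Hypothetical sketch of the "yellow cell" rule as provenance data:
# each figure in a working paper records where it came from, so
# partner review can target AI-generated numbers specifically.
from dataclasses import dataclass

@dataclass
class Figure:
    cell: str                   # worksheet reference, e.g. "B7"
    value: float
    source: str                 # "manual" or "ai"
    verified_against: str = ""  # citation the preparer checked

def cells_for_partner_review(figures):
    """Return AI-sourced figures, flagging any with no verification note."""
    flagged = []
    for f in figures:
        if f.source == "ai":
            flagged.append((f.cell, f.value, f.verified_against or "UNVERIFIED"))
    return flagged

# Example working paper: one verified AI figure, one manual figure,
# one AI figure the preparer forgot to annotate.
paper = [
    Figure("B7", 75000.0, "ai", "ATO GST registration threshold guidance"),
    Figure("B8", 12345.0, "manual"),
    Figure("B9", 6100.0, "ai"),  # no source noted -> gets flagged
]

for cell, value, src in cells_for_partner_review(paper):
    print(cell, value, src)
```

the point of the sketch is the separation: juniors fill in `verified_against`, and the review pass mechanically refuses to let an unannotated AI number through — the same discipline as the yellow highlight, just enforceable.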
solo practitioner here so i can't speak to managing staff, but i'll say this: the biggest risk isn't that juniors use AI. it's that they use AI without understanding what a WRONG answer looks like.
i've seen fresh grads accept an AI answer that any experienced practitioner would immediately flag as suspicious. the pattern-matching that comes from experience is exactly what juniors lack, and AI doesn't develop it.
we do something similar in our Berlin office. Referendare (trainee tax advisors) must pass an "AI literacy" module before they get access to any AI tools for client work. it covers:
- how LLMs work (enough to understand hallucination risk)
- when to use AI vs when to use the Haufe/NWB commentary
- our firm's QA checklist
- data privacy rules under the GDPR (DSGVO)
it's a half-day session. the younger staff actually appreciate the structure — they KNOW they're overconfident and want guardrails.
the UK training contract framework doesn't address AI at all yet. ICAEW is "considering guidance." in the meantime we're all making it up.
my approach with trainees: pair programming style. they sit next to me while i use AI on a real return, talking through WHY i accept or reject each output. takes longer initially but they learn the critical thinking faster than any formal training.