10 reasons why ChatGPT & Claude cannot reliably answer legal questions

ChatGPT, Claude, and other foundation models alone will never be able to answer your legal questions reliably.
The reasons for this are:
1. No legal verification
The answers have not been checked by lawyers and may be professionally incorrect or misleading.
2. No access to current legal texts & rulings
Models work with training data, not with constantly updated official sources such as the Swiss Code of Obligations (OR), the Civil Code (ZGB), or Federal Supreme Court rulings.
3. Lack of contextual knowledge of Swiss law
Legal systems are local. Foundation models are trained largely on US and international legal material, not on Swiss specifics.
4. Hallucinations (i.e. fabricated answers)
Models “invent” answers when they are unsure, which is highly risky in the legal sector.
5. No liability or traceability
There is no legal responsibility for an answer and no documentation of how it was produced.
6. Not always up to date
Models are unaware of legislative changes or new rulings after their training cutoff (e.g. ChatGPT's knowledge ends in late 2023).
7. Data protection issues in sensitive cases
Confidential legal matters must not simply be entered into a generic AI model.
8. Lack of detail and nuance
Legal answers often require differentiated interpretation; models usually produce superficial boilerplate.
9. No citations or legal basis
Foundation models do not cite specific statutory articles, paragraphs, or court rulings, so you cannot tell what an answer is based on.
10. Not a substitute for legal advice
What counts in legal contexts is reliability, traceability, and usability in court. Foundation models offer none of these.