Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Yes. I asked Chat to review website terms and look for any differences between the terms on the site and the document I uploaded to it. When it identified all sorts of non-issues between the documents, I got concerned. So, I asked it to review the provision in each document on “AI hallucinations” (which did not exist in either document). Chat simply “made up” a provision in the website terms, reproduced it for me, and recommended I edit the document to add it. It was absolutely sure that this appeared on the web version. It had me so convinced that I scrolled the Terms page twice just to make sure I wasn’t the crazy one.
Source: reddit · AI Harm Incident · 1747012683.0 · ♥ 11
Coding Result
Dimension       Value
Responsibility  ai_itself
Reasoning       consequentialist
Policy          liability
Emotion         fear
Coded at        2026-04-25T08:33:43.502452
Raw LLM Response
[
  {"id": "rdc_mrtgd8d", "responsibility": "none",      "reasoning": "unclear",          "policy": "none",          "emotion": "approval"},
  {"id": "rdc_mrubyeu", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "liability",     "emotion": "fear"},
  {"id": "rdc_mrtafeh", "responsibility": "developer", "reasoning": "deontological",    "policy": "industry_self", "emotion": "mixed"},
  {"id": "rdc_mrulpjd", "responsibility": "none",      "reasoning": "unclear",          "policy": "none",          "emotion": "mixed"},
  {"id": "rdc_mrtcjne", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "liability",     "emotion": "resignation"}
]
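The coded dimensions shown above can be recovered from the raw response by parsing the JSON array and selecting the record whose `id` matches this comment. A minimal sketch (the record id `rdc_mrubyeu` is the one paired with this comment's coding result; the pipeline's actual lookup code is not shown here):

```python
import json

# The model emits one JSON array per coded batch; each element carries the
# comment id plus the four coding dimensions.
raw = (
    '[{"id":"rdc_mrtgd8d","responsibility":"none","reasoning":"unclear",'
    '"policy":"none","emotion":"approval"},'
    '{"id":"rdc_mrubyeu","responsibility":"ai_itself","reasoning":"consequentialist",'
    '"policy":"liability","emotion":"fear"},'
    '{"id":"rdc_mrtafeh","responsibility":"developer","reasoning":"deontological",'
    '"policy":"industry_self","emotion":"mixed"},'
    '{"id":"rdc_mrulpjd","responsibility":"none","reasoning":"unclear",'
    '"policy":"none","emotion":"mixed"},'
    '{"id":"rdc_mrtcjne","responsibility":"ai_itself","reasoning":"consequentialist",'
    '"policy":"liability","emotion":"resignation"}]'
)

records = json.loads(raw)

# Pick out the record for this comment by its id.
coded = next(r for r in records if r["id"] == "rdc_mrubyeu")
print(coded["responsibility"], coded["reasoning"],
      coded["policy"], coded["emotion"])
# → ai_itself consequentialist liability fear
```

The printed values match the coding-result table above, confirming the table was taken from the `rdc_mrubyeu` entry of the batch response.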