Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I've experienced this first hand with ChatGPT for medical information, or anything else for that matter. It tends to hallucinate and give you contraindicating information. When you press or correct it, it sort of admits the mistake but glosses over the correction further down the conversation. Luckily I do my due diligence with everything ChatGPT presents as fact.
Source: YouTube · Incident: AI Harm Incident · Posted: 2025-11-25T06:1… · Likes: 1
Coding Result
Dimension        Value
Responsibility   ai_itself
Reasoning        consequentialist
Policy           liability
Emotion          fear
Coded at         2026-04-27T06:24:59.937377
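
The coding result is a flat record: four categorical dimensions plus a timestamp recording when the codes were stored. A minimal sketch of that shape in Python (the CodingResult class is illustrative only, not part of the pipeline; field names follow the table above):

from dataclasses import dataclass

@dataclass
class CodingResult:
    # Categorical dimensions assigned by the model for one comment.
    responsibility: str  # e.g. "ai_itself"
    reasoning: str       # e.g. "consequentialist"
    policy: str          # e.g. "liability"
    emotion: str         # e.g. "fear"
    coded_at: str        # ISO 8601 timestamp, e.g. "2026-04-27T06:24:59.937377"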
Raw LLM Response
[{"id":"ytc_UgyBT9poAAMTZCikqcZ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},{"id":"ytc_UgzgUe6Zwi3KFYjq-4Z4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},{"id":"ytc_Ugwp6itWhN9NK_yJWU14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"fear"},{"id":"ytc_UgxLTecsnkYpLpPn0rF4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},{"id":"ytc_UgylpSPoKXckh-4WczZ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},{"id":"ytc_UgwzQV3gRkI-5pjk4Nh4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},{"id":"ytc_UgzQwY7JjRucGYFY1bJ4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"ban","emotion":"fear"},{"id":"ytc_UgzRnD2Me5GfRxcS0nR4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"resignation"},{"id":"ytc_Ugx5KvVz2ofbLUET79p4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"indifference"},{"id":"ytc_Ugx-sYy84YtMcKXJ_tN4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"regulate","emotion":"outrage"}]