Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
This is *why* ChatGPT and LLMs in general are so dangerous. They’re just unleashed onto the public as science communicators with no QA testing. Yes, people often need to be told multiple times to not do something stupid. The difference is that a human expert will often *pick up on that*. LLMs don’t have that training and aren’t required to have it.
YouTube · AI Harm Incident · 2025-12-02T23:0…
Coding Result
| Dimension      | Value                      |
| -------------- | -------------------------- |
| Responsibility | company                    |
| Reasoning      | consequentialist           |
| Policy         | regulate                   |
| Emotion        | outrage                    |
| Coded at       | 2026-04-27T06:24:53.388235 |
Raw LLM Response
[
  {"id": "ytc_UgzUBcU4DSZbXIQaAfh4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgwX50B-MrjomoKQGu94AaABAg", "responsibility": "none", "reasoning": "deontological", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgzPatLtsj91sLLXGRV4AaABAg", "responsibility": "user", "reasoning": "deontological", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgwA1K7rtMfStX2KXvx4AaABAg", "responsibility": "ai_itself", "reasoning": "virtue", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_Ugyom-g-AkC3wP68yZJ4AaABAg", "responsibility": "none", "reasoning": "deontological", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgwfYh-db8SaZy-kVTd4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_UgwqsqhCWUyOeYKG23Z4AaABAg", "responsibility": "user", "reasoning": "consequentialist", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgxTTjflKLyx8ehnflp4AaABAg", "responsibility": "user", "reasoning": "deontological", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgyesUj-YTXC6PY1xL14AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugy7oZgAZiLDR8qrPmd4AaABAg", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "none", "emotion": "mixed"}
]
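The raw response is a JSON array with one object per coded comment, keyed by the comment `id` and carrying the four coding dimensions (`responsibility`, `reasoning`, `policy`, `emotion`). A minimal sketch of how a coded comment's record might be looked up from such a batch response (the `code_for` helper name is hypothetical; the field names come from the response above):

```python
import json

# A trimmed-down batch response in the same shape as the raw LLM output above.
raw = '''[
  {"id": "ytc_UgwfYh-db8SaZy-kVTd4AaABAg", "responsibility": "company",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_UgxTTjflKLyx8ehnflp4AaABAg", "responsibility": "user",
   "reasoning": "deontological", "policy": "none", "emotion": "outrage"}
]'''

def code_for(comment_id, raw_response):
    """Return the coding record for one comment id, or None if it is absent."""
    return next((c for c in json.loads(raw_response) if c["id"] == comment_id), None)

record = code_for("ytc_UgwfYh-db8SaZy-kVTd4AaABAg", raw)
print(record["policy"])  # regulate
```

Matching on `id` rather than array position keeps the lookup robust if the model reorders or drops comments in its response; a `None` result flags a comment the model failed to code.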