Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
So in this case, the AI wasn't hallucinating, the human just decided to ignore the context the AI was explicitly referring to because otherwise it didn't serve his bias.
Source: YouTube · AI Harm Incident · 2026-01-25T14:4…
Coding Result
Responsibility: user
Reasoning: deontological
Policy: none
Emotion: mixed
Coded at: 2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id":"ytc_UgxWdglDQM7KiAQE2z94AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgzQyrq4UIQF7t9xCYF4AaABAg","responsibility":"company","reasoning":"mixed","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugzh05lfJjDLsfZPv4Z4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
  {"id":"ytc_Ugz57paqMxdcB9E5k2h4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgxVHDvRepdR6ywY2xt4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgxjffzeWEYe19BCu7Z4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgxbYUZq-s80VrHTDzt4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgwBJhGIfi8V3aAzQhJ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgzbbxLOKXPXTsu3Av54AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgwIr7qb4TKZYXwTpbh4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"mixed"}
]
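The raw response above is a JSON array with one code object per comment. A minimal sketch of parsing and sanity-checking such a response in Python — the allowed value sets below are inferred from the values observed in this response, not from a documented codebook, so treat them as assumptions:

```python
import json

# Two code objects excerpted verbatim from the raw response above.
raw = '''[
  {"id":"ytc_Ugz57paqMxdcB9E5k2h4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgxjffzeWEYe19BCu7Z4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]'''

# Assumed value sets, inferred only from this one response; the actual
# codebook may permit additional labels for each dimension.
ALLOWED = {
    "responsibility": {"ai_itself", "company", "user", "distributed"},
    "reasoning": {"consequentialist", "deontological", "mixed"},
    "policy": {"none", "ban", "regulate"},
    "emotion": {"outrage", "fear", "indifference", "mixed"},
}

def validate(codes):
    """Return (comment id, dimension) pairs whose value is outside ALLOWED."""
    errors = []
    for code in codes:
        for dim, allowed in ALLOWED.items():
            if code.get(dim) not in allowed:
                errors.append((code.get("id"), dim))
    return errors

codes = json.loads(raw)
print(validate(codes))  # → [] (both excerpted codes pass)
```

A check like this catches the common failure mode where the model invents a label outside the coding scheme, which would otherwise silently skew the tallies.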