Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
an llm saying they have consciousness means nothing - its regurgitating human language with a neural network and then is tuned to be like how people want and expect duh it can refer to itself as conscious or any other idea that humans have talked about
youtube AI Harm Incident 2025-11-17T01:3…
Coding Result
Dimension        Value
Responsibility   developer
Reasoning        deontological
Policy           none
Emotion          indifference
Coded at         2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id":"ytc_Ugz02VUPjUz7ze0twSZ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgylAubg9ejSaIdS2v94AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgzE-kHq17oG5uMt-6V4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzTMeG3KQ4HnBQTPjx4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugzoin09xVN854RvZF54AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgzMkR4xSDYuXU1STi94AaABAg","responsibility":"ai_itself","reasoning":"virtue","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgzFzTG3wR-QYHXcOV94AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugyj1agzqS_Fo8Msr7J4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"},
  {"id":"ytc_UgyQTnrwCmIMEhaUvX94AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgyDVl5GZdifqPczoid4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"indifference"}
]
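The raw response is a JSON array with one object per coded comment, keyed by comment id and carrying the four coding dimensions shown in the result table above. A minimal sketch of turning such a response back into per-comment codes (the `parse_codes` helper and the fallback value "unclear" are illustrative assumptions, not part of the original pipeline):

```python
import json

# The four coding dimensions used in the result table above.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def parse_codes(raw_response: str) -> dict:
    """Map comment id -> {dimension: value} from a raw LLM response.

    Assumes the response is a JSON array of objects, each with an "id"
    field plus the four dimension fields; missing fields fall back to
    "unclear" (a hypothetical default, not from the original scheme).
    """
    codes = {}
    for item in json.loads(raw_response):
        codes[item["id"]] = {d: item.get(d, "unclear") for d in DIMENSIONS}
    return codes

# Example with one entry copied from the response above:
raw = ('[{"id":"ytc_UgyDVl5GZdifqPczoid4AaABAg",'
       '"responsibility":"developer","reasoning":"deontological",'
       '"policy":"none","emotion":"indifference"}]')
codes = parse_codes(raw)
print(codes["ytc_UgyDVl5GZdifqPczoid4AaABAg"]["responsibility"])  # developer
```

Looking up a comment id in the parsed dict reproduces the values shown in the Coding Result table for that comment.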