Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
An LLM hallucination seems kind of like a human rationalization. People often just say stuff that sounds good without really thinking about it. The big difference is that, sometimes, people are more self-aware. Sometimes.
youtube AI Responsibility 2023-06-10T17:2… ♥ 5
Coding Result
Dimension       Value
Responsibility  ai_itself
Reasoning       virtue
Policy          none
Emotion         indifference
Coded at        2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id": "ytc_UgzLYsffOmWHZSRITZF4AaABAg", "responsibility": "none",      "reasoning": "consequentialist", "policy": "none",          "emotion": "approval"},
  {"id": "ytc_Ugzq6feUOGxpv6E7j_l4AaABAg", "responsibility": "none",      "reasoning": "mixed",            "policy": "none",          "emotion": "resignation"},
  {"id": "ytc_UgzJGxZesVNIL2eUjPp4AaABAg", "responsibility": "ai_itself", "reasoning": "virtue",           "policy": "none",          "emotion": "indifference"},
  {"id": "ytc_Ugy481pEXfgPhnZddJ54AaABAg", "responsibility": "none",      "reasoning": "consequentialist", "policy": "none",          "emotion": "fear"},
  {"id": "ytc_Ugzt7mhvjsIofCw7dAd4AaABAg", "responsibility": "company",   "reasoning": "deontological",    "policy": "liability",     "emotion": "outrage"},
  {"id": "ytc_UgwA7ZmVMcJbMAvxcZN4AaABAg", "responsibility": "user",      "reasoning": "deontological",    "policy": "industry_self", "emotion": "approval"},
  {"id": "ytc_UgyhNauPuyQcCV2hR9F4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none",          "emotion": "fear"},
  {"id": "ytc_UgzhXwo2-mnUCKeS_dZ4AaABAg", "responsibility": "none",      "reasoning": "mixed",            "policy": "none",          "emotion": "indifference"},
  {"id": "ytc_UgwrJI6CuMulUEqDm814AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none",          "emotion": "indifference"},
  {"id": "ytc_UgyT2gR9PaLC1I2As2x4AaABAg", "responsibility": "company",   "reasoning": "deontological",    "policy": "ban",           "emotion": "outrage"}
]
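The batched response above codes ten comments at once, keyed by comment id. A minimal sketch of how such a response might be validated and matched back to one comment (the function name and error handling are hypothetical, not taken from the actual tool; the raw string is abbreviated to two records from the output above):

```python
import json

# Abbreviated raw LLM response: two of the ten records shown above.
raw_response = """
[
  {"id": "ytc_UgzLYsffOmWHZSRITZF4AaABAg", "responsibility": "none",
   "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgzJGxZesVNIL2eUjPp4AaABAg", "responsibility": "ai_itself",
   "reasoning": "virtue", "policy": "none", "emotion": "indifference"}
]
"""

# The four coding dimensions plus the id used to join back to comments.
EXPECTED_KEYS = {"id", "responsibility", "reasoning", "policy", "emotion"}

def extract_coding(raw: str, comment_id: str) -> dict:
    """Parse a batched LLM response and return the record for one comment.

    Raises ValueError if a record lacks an expected key or the id is absent.
    (Hypothetical helper for illustration.)
    """
    records = json.loads(raw)
    by_id = {}
    for rec in records:
        missing = EXPECTED_KEYS - rec.keys()
        if missing:
            raise ValueError(f"record {rec.get('id')} missing keys: {missing}")
        by_id[rec["id"]] = rec
    if comment_id not in by_id:
        raise ValueError(f"no coding returned for {comment_id}")
    return by_id[comment_id]

coding = extract_coding(raw_response, "ytc_UgzJGxZesVNIL2eUjPp4AaABAg")
print(coding["responsibility"], coding["reasoning"], coding["emotion"])
# → ai_itself virtue indifference
```

Validating keys before joining guards against the common failure mode where the model drops a dimension or returns fewer records than comments sent.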