Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I love the human-aligned shit like “then this AI went crazy!” Or “it was super evil and manipulative!” But we forget those are human-arbitrary alignments. In a truly amoral universe that’s just intelligence.
youtube AI Moral Status 2026-01-25T00:2…
Coding Result
Dimension       Value
Responsibility  ai_itself
Reasoning       mixed
Policy          none
Emotion         indifference
Coded at        2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id":"ytc_UgyEN-ylCbORN4J5iDF4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgxqMDWfk3JzRwg70V14AaABAg","responsibility":"distributed","reasoning":"unclear","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugy-sJLCH_rWgVsxyrF4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgwGRA_f7HYY7YRbi_Z4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgyM99RUQJKLxSQXL7h4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugw6qkiZP5FOm16Feu14AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugz841FN1gxH5SnZjnZ4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugw5LW59omHXYbrDB2N4AaABAg","responsibility":"company","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgybrezQ1LZJ0EiPrWB4AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgzSNayZnq4MlTgjpxp4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"}
]
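Since the model returns one JSON array covering a whole batch of comments, inspecting a single comment means locating its record by id. A minimal sketch of that lookup, assuming the response parses as plain JSON and that the four coding dimensions are exactly those shown in the table above (the function name is illustrative, not part of any real pipeline):

```python
import json

# Coding dimensions taken from the result table above; any other keys
# (such as "id") are treated as metadata, not codes.
DIMENSIONS = {"responsibility", "reasoning", "policy", "emotion"}

def parse_raw_response(raw: str) -> dict:
    """Map each comment id to its coded dimensions."""
    records = json.loads(raw)
    return {
        rec["id"]: {k: v for k, v in rec.items() if k in DIMENSIONS}
        for rec in records
    }

# One record copied from the raw response above.
raw = ('[{"id":"ytc_Ugz841FN1gxH5SnZjnZ4AaABAg",'
       '"responsibility":"ai_itself","reasoning":"mixed",'
       '"policy":"none","emotion":"indifference"}]')
coded = parse_raw_response(raw)
print(coded["ytc_Ugz841FN1gxH5SnZjnZ4AaABAg"]["emotion"])  # indifference
```

The lookup for `ytc_Ugz841FN1gxH5SnZjnZ4AaABAg` reproduces the coding result shown above (responsibility ai_itself, reasoning mixed, policy none, emotion indifference), which is one way to spot-check that the stored codes match the raw model output.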