Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Can we fckin stop the sh1t about llms being sentiment, they are trained on alot of human conversations and that reflects in their outputs nothing surprising here, based on human conversations they probably kind of gained the "desire to live"...
Source: youtube · AI Moral Status · 2025-06-05T03:3… · ♥ 1
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  none
Reasoning       unclear
Policy          none
Emotion         indifference
Coded at        2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id": "ytc_Ugzewb1v1r6QW5UJX9R4AaABAg", "responsibility": "none",      "reasoning": "unclear",          "policy": "none",      "emotion": "approval"},
  {"id": "ytc_UgyPikg4Jsz1pvptuut4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear",   "emotion": "fear"},
  {"id": "ytc_UgwVW1AQ1QA6n_Nehup4AaABAg", "responsibility": "unclear",   "reasoning": "consequentialist", "policy": "unclear",   "emotion": "fear"},
  {"id": "ytc_Ugy3jcmleevzz7VhY1l4AaABAg", "responsibility": "company",   "reasoning": "virtue",           "policy": "regulate",  "emotion": "outrage"},
  {"id": "ytc_UgxImK42PvE9dtAmbXR4AaABAg", "responsibility": "none",      "reasoning": "unclear",          "policy": "none",      "emotion": "indifference"},
  {"id": "ytc_UgwDzOnYnM19XniLXwt4AaABAg", "responsibility": "user",      "reasoning": "deontological",    "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_UgyvWWOfbUU8kM3TVCV4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear",   "emotion": "indifference"},
  {"id": "ytc_Ugy9IOknttEd_VuYFIl4AaABAg", "responsibility": "none",      "reasoning": "unclear",          "policy": "none",      "emotion": "indifference"},
  {"id": "ytc_UgwkAoOY19o5azc7Qzx4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "ban",       "emotion": "fear"},
  {"id": "ytc_UgwVYASbQtGYRqZBc294AaABAg", "responsibility": "none",      "reasoning": "unclear",          "policy": "ban",       "emotion": "outrage"}
]
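A batch response like the one above can be parsed and validated before the codings are stored. A minimal sketch, assuming the allowed values per dimension are those observed in this run (the full codebook may define more); `parse_batch` and `ALLOWED` are hypothetical names, not part of the tool:

```python
import json

# Assumed allowed values per coding dimension, inferred from this run.
ALLOWED = {
    "responsibility": {"none", "ai_itself", "company", "user", "unclear"},
    "reasoning": {"unclear", "consequentialist", "deontological", "virtue"},
    "policy": {"none", "unclear", "regulate", "liability", "ban"},
    "emotion": {"approval", "fear", "outrage", "indifference"},
}

def parse_batch(raw: str) -> dict:
    """Parse a raw batch response and index the codings by comment id,
    rejecting any value outside the allowed sets."""
    codings = {}
    for item in json.loads(raw):
        cid = item["id"]
        for dim, allowed in ALLOWED.items():
            if item[dim] not in allowed:
                raise ValueError(f"{cid}: invalid {dim} value {item[dim]!r}")
        # Keep only the four coding dimensions, dropping any extra keys.
        codings[cid] = {dim: item[dim] for dim in ALLOWED}
    return codings

raw = ('[{"id":"ytc_Ugzewb1v1r6QW5UJX9R4AaABAg","responsibility":"none",'
       '"reasoning":"unclear","policy":"none","emotion":"approval"}]')
print(parse_batch(raw)["ytc_Ugzewb1v1r6QW5UJX9R4AaABAg"]["emotion"])  # approval
```

Indexing by `id` also makes it easy to spot mismatches between the stored coding result and the raw response for a given comment.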