Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
The only hope I have that AGI will not destroy humanity is that it will be so much more intelligent in every way that it understands humans and AGI could live together and make a much more interesting world, unlike human thinking that just because we are above some species we can decide for them. And the other thing is, our unpredictable stupidity will always be more "entertainment" than a predictable, logically thinking machine. 😅
youtube · Cross-Cultural · 2026-03-31T11:1…
Coding Result
Dimension       Value
Responsibility  ai_itself
Reasoning       mixed
Policy          none
Emotion         mixed
Coded at        2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id": "ytc_UgwYU9aERCAqheL9d4h4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgzrXbOaONA_LpzKdXZ4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgwJaDSdej932uONlKZ4AaABAg", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "ban", "emotion": "fear"},
  {"id": "ytc_UgwwDkOFWSxuC2mZQRF4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_UgxjNntD23XgmK-CZR14AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgybnnNTJZq9DZ4wBNR4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugza0wXBJAY8rN8x6B94AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgzxCCdw8O55QmKgZcx4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgzoZRwk3F87yZdzAil4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgyYvh8YpI4SdkqEBrN4AaABAg", "responsibility": "ai_itself", "reasoning": "mixed", "policy": "none", "emotion": "mixed"}
]
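The raw response is a batched JSON array, so the per-comment coding shown above is recovered by matching the comment's id against the array. A minimal sketch of that lookup, assuming only the field names visible in the response (the `coding_for` helper and the two-record sample are illustrative, not the project's actual code):

```python
import json

# Two records excerpted from the raw response above; the real payload
# contains ten such objects in one JSON array.
RAW = """[
  {"id": "ytc_UgyYvh8YpI4SdkqEBrN4AaABAg",
   "responsibility": "ai_itself", "reasoning": "mixed",
   "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgwJaDSdej932uONlKZ4AaABAg",
   "responsibility": "ai_itself", "reasoning": "deontological",
   "policy": "ban", "emotion": "fear"}
]"""

def coding_for(comment_id: str, raw_json: str) -> dict:
    """Return the coding record for one comment id (KeyError if absent)."""
    by_id = {rec["id"]: rec for rec in json.loads(raw_json)}
    return by_id[comment_id]

# The id of the comment inspected on this page; its record matches
# the Coding Result table (responsibility=ai_itself, reasoning=mixed).
result = coding_for("ytc_UgyYvh8YpI4SdkqEBrN4AaABAg", RAW)
print(result["responsibility"], result["reasoning"])  # ai_itself mixed
```

The dict-by-id indexing is one straightforward choice here; a linear scan over the array would work equally well for batches this small.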