Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
That's the problem with lines of reasoning in general, they are often inconsistent. The AI has to give reasonable answers - without explaining it's own and the world's functionalities. So although you got a point there, the conversation was a bit stupid. I mean if you'd pushed this further you could even blame the AI for speaking and existing at all.
Source: youtube · Posted: 2025-05-26T01:0… · ♥ 1
Coding Result
Dimension       Value
Responsibility  ai_itself
Reasoning       mixed
Policy          none
Emotion         mixed
Coded at        2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id": "ytc_Ugzk7BtKaHm8RpZQ-nd4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugyii6e-maeHCW3kDJV4AaABAg", "responsibility": "ai_itself", "reasoning": "mixed", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgxhE-0Dfz06YvlPJ4Z4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugy6tcs63kFrnTF7K594AaABAg", "responsibility": "government", "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_UgyoQZ27_SrV3NdR9_V4AaABAg", "responsibility": "none", "reasoning": "virtue", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgxhU96e8k-WkMNZVCV4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgyfFGxcWDAU7eUoxtR4AaABAg", "responsibility": "ai_itself", "reasoning": "unclear", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgxRBaB__Ernbnjsih14AaABAg", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_Ugx3fs2sNM7NJZFDkMx4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgxC9Wpbqlt8Qiigsm54AaABAg", "responsibility": "user", "reasoning": "consequentialist", "policy": "liability", "emotion": "mixed"}
]
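A batch response like the one above can be parsed and indexed by comment id to recover the coding for a single comment. This is a minimal sketch, not the tool's actual code; the variable name `raw_response` is illustrative, and only the one record shown in the Coding Result table is excerpted here:

```python
import json

# Hypothetical variable holding the raw LLM batch output (a JSON array of
# coding records, one per comment). Excerpted to the single record that
# backs the Coding Result table above.
raw_response = """
[
  {"id": "ytc_Ugyii6e-maeHCW3kDJV4AaABAg",
   "responsibility": "ai_itself", "reasoning": "mixed",
   "policy": "none", "emotion": "mixed"}
]
"""

# Index the records by comment id for direct lookup.
records = {rec["id"]: rec for rec in json.loads(raw_response)}

# Inspect the exact coding the model assigned to one comment.
coding = records["ytc_Ugyii6e-maeHCW3kDJV4AaABAg"]
print(coding["responsibility"], coding["reasoning"])  # ai_itself mixed
```

Indexing by `id` also makes it easy to detect comments the model skipped: any id in the input batch that is missing from `records` was left uncoded.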