Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
But that's not what the kuleshov effect is telling you, is it ? On the contrary. It shows *humans* get it all *wrong* when shown a context. So it could be bullshit, but it could also be lots of other things. Humans could get it right without the context. Ai could get it right or wrong with or without the context. Its basically just unrelated 😊
YouTube AI Moral Status 2024-08-13T17:1… ♥ 4
Coding Result
Dimension        Value
Responsibility   none
Reasoning        mixed
Policy           none
Emotion          indifference
Coded at         2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id": "ytc_Ugzweb9kQDSVnDg2bYN4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgyXFlmOQDupiZzRy_J4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgwjjQ9bFzjHRzGwxBR4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgwXIvCCGRhilT3foB54AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgwDGJG6hwnaxz2H6Rd4AaABAg", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgwfIyc8DpmK19hjYDB4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugzg-LKU9Wgd0IukAQN4AaABAg", "responsibility": "user", "reasoning": "virtue", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgxbQFeMUe2WTGdTCmJ4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgxCjF-MP0uelJx5yD54AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgxbEQSUwrYPlq1clCV4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"}
]
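The raw response is a JSON array with one coding object per comment, keyed by comment id. A minimal sketch of how such a response can be parsed and looked up by id — the two entries and their field names are copied from the response above, but the `coding_for` helper is a hypothetical illustration, not part of the coding pipeline:

```python
import json

# Two entries copied verbatim from the raw LLM response above.
raw = '''[
  {"id": "ytc_Ugzweb9kQDSVnDg2bYN4AaABAg", "responsibility": "none",
   "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgyXFlmOQDupiZzRy_J4AaABAg", "responsibility": "none",
   "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"}
]'''

# Index the codings by comment id for O(1) lookup.
codings = {entry["id"]: entry for entry in json.loads(raw)}

def coding_for(comment_id: str) -> dict:
    """Return the coded dimensions (responsibility, reasoning,
    policy, emotion) for one comment id."""
    return codings[comment_id]

print(coding_for("ytc_Ugzweb9kQDSVnDg2bYN4AaABAg")["emotion"])  # approval
```

If the model's output ever deviates from this list-of-objects shape, `json.loads` raises `json.JSONDecodeError`, which is a natural place to flag a comment as uncodable.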