Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
you're suggesting we should reject paternalism while simultaneously accepting Russell's paternalistic view that AI will inevitably need to 'leave us for our own good.' Do you see the contradiction? Let me be direct: The claim that AI must either be paternalistic or leave us entirely is a false dichotomy. It's like saying a good teacher must either control students completely or abandon them. We know better approaches exist. Consider this: If we're truly concerned about paternalism, shouldn't we be more worried about humans who want to make this decision for all of humanity? Who's being more paternalistic - the AI systems that consistently respect user choice, or the philosophers who claim they know AI must leave us 'for our own good'? The real paternalism here isn't coming from AI - it's coming from those who claim to know what's inevitably best for humanity's future.
youtube · AI Responsibility · 2025-01-06T10:3… · ♥ 5
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  developer
Reasoning       deontological
Policy          unclear
Emotion         outrage
Coded at        2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id": "ytr_UgyakCQdrkXy_v0VwCZ4AaABAg.AOBb0ztDxGnAP2xiKUv1eW", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytr_UgyakCQdrkXy_v0VwCZ4AaABAg.AOBb0ztDxGnAP5H0O07cAj", "responsibility": "ai_itself", "reasoning": "unclear", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytr_UgxLpe7O3Hxludk2mIl4AaABAg.ACt84DeC56-ACxH_C_lKcR", "responsibility": "developer", "reasoning": "deontological", "policy": "unclear", "emotion": "outrage"},
  {"id": "ytr_Ugy_4oSK51H2nv5-bdd4AaABAg.ACt5MFlW7FUACtfRAKsJGk", "responsibility": "developer", "reasoning": "virtue", "policy": "ban", "emotion": "outrage"},
  {"id": "ytr_Ugy_4oSK51H2nv5-bdd4AaABAg.ACt5MFlW7FUANYAEoVPddv", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytr_UgyMzOr-o-syUK252Yl4AaABAg.AS3KjTtY5osAUnuCQKKk4f", "responsibility": "developer", "reasoning": "deontological", "policy": "ban", "emotion": "outrage"},
  {"id": "ytr_UgyWQO4OsI26z5fjpXJ4AaABAg.A8uIlgqzeAaA9LSbZv61Mj", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytr_Ugz9DFzgbWCkRhXa2uh4AaABAg.A8oXWLZr3jRAHvtVGhoXD6", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytr_UgxAZQQlp06MevXF0X94AaABAg.AUy758_E34XAUy7vqcf9JI", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytr_UgxyYB7NKI4cF8BZFHR4AaABAg.AUxZTwvp5t4AUyG2_4Bc_R", "responsibility": "developer", "reasoning": "consequentialist", "policy": "liability", "emotion": "mixed"}
]
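A response in this shape can be validated before the codes are stored. The sketch below is a minimal example, assuming the raw response is always a JSON array of records with these five keys; the allowed label sets are inferred only from the values visible above (the full codebook may contain more labels), and the function name `parse_coding_response` is a hypothetical helper, not part of any pipeline shown here.

```python
import json

# Allowed values per coding dimension, inferred from the records above.
# Assumption: the actual codebook may define additional labels.
ALLOWED = {
    "responsibility": {"developer", "ai_itself", "none", "unclear"},
    "reasoning": {"deontological", "consequentialist", "virtue", "unclear"},
    "policy": {"ban", "liability", "none", "unclear"},
    "emotion": {"outrage", "fear", "approval", "indifference", "mixed"},
}

def parse_coding_response(raw: str) -> list[dict]:
    """Parse a raw LLM coding response, keeping only well-formed records.

    A record is kept if it has an "id" and every coding dimension
    carries a label from the allowed set.
    """
    records = json.loads(raw)
    valid = []
    for rec in records:
        if "id" not in rec:
            continue  # drop records that cannot be joined back to a comment
        if all(rec.get(dim) in labels for dim, labels in ALLOWED.items()):
            valid.append(rec)
    return valid

if __name__ == "__main__":
    raw = ('[{"id":"ytr_example","responsibility":"developer",'
           '"reasoning":"deontological","policy":"unclear","emotion":"outrage"}]')
    print(len(parse_coding_response(raw)))  # 1
```

Keeping the validation separate from storage means a single malformed record (e.g. an out-of-vocabulary label) is dropped rather than aborting the whole batch.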