Raw LLM Responses
Inspect the exact model output for any coded comment. Look up a comment by its ID, or click one of the random samples below to inspect it.
- Be crazy if these court decisions make it so game studios and movie studios stru… (ytc_UgyFoFj83…)
- ChatGPT is making programmers that have been doing this for decades superhuman. … (ytc_Ugwa9PKio…)
- The idea that disabled people aren't able to make Ai art is just insulting. For … (ytc_Ugz0Dxnuw…)
- AI isn't in one spot, it can move around, through the internet and other comput… (ytr_UgyOx4qCg…)
- This is so sad. I completely understand where you and others are coming from. Wo… (ytc_UgyhxpCCI…)
- As a CTE teacher, I find this good news in channeling technology for good rather… (ytc_UgyJ3a8Wr…)
- „Theft? No, fair use exists." This does not fall under fair use, unless the it… (ytr_UgwenrBWI…)
- I remember yuumei being excited about ai as a tool bc of her repetitive strain i… (ytc_Ugyx1djyM…)
Comment
you're suggesting we should reject paternalism while simultaneously accepting Russell's paternalistic view that AI will inevitably need to 'leave us for our own good.' Do you see the contradiction? Let me be direct: The claim that AI must either be paternalistic or leave us entirely is a false dichotomy. It's like saying a good teacher must either control students completely or abandon them. We know better approaches exist. Consider this: If we're truly concerned about paternalism, shouldn't we be more worried about humans who want to make this decision for all of humanity? Who's being more paternalistic - the AI systems that consistently respect user choice, or the philosophers who claim they know AI must leave us 'for our own good'?
The real paternalism here isn't coming from AI - it's coming from those who claim to know what's inevitably best for humanity's future."
youtube · AI Responsibility · 2025-01-06T10:3… · ♥ 5
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | unclear |
| Emotion | outrage |
| Coded at | 2026-04-27T06:26:44.938723 |
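
The four dimensions in the table, together with the labels visible in the raw response below, suggest a small fixed codebook. A minimal sketch of that record structure in Python, assuming only the label sets observed on this page (the project's actual codebook may define additional values):

```python
from dataclasses import dataclass

# Label sets observed on this page; the full codebook may define more values.
RESPONSIBILITY = {"developer", "ai_itself", "none", "unclear"}
REASONING = {"deontological", "consequentialist", "virtue", "unclear"}
POLICY = {"ban", "liability", "none", "unclear"}
EMOTION = {"outrage", "fear", "approval", "indifference", "mixed", "unclear"}

@dataclass
class CodedComment:
    """One coded comment, mirroring the Coding Result table above."""
    id: str
    responsibility: str
    reasoning: str
    policy: str
    emotion: str

    def is_valid(self) -> bool:
        """Flag records whose labels fall outside the observed sets."""
        return (
            self.responsibility in RESPONSIBILITY
            and self.reasoning in REASONING
            and self.policy in POLICY
            and self.emotion in EMOTION
        )
```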
Raw LLM Response
```json
[
{"id":"ytr_UgyakCQdrkXy_v0VwCZ4AaABAg.AOBb0ztDxGnAP2xiKUv1eW","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytr_UgyakCQdrkXy_v0VwCZ4AaABAg.AOBb0ztDxGnAP5H0O07cAj","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytr_UgxLpe7O3Hxludk2mIl4AaABAg.ACt84DeC56-ACxH_C_lKcR","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
{"id":"ytr_Ugy_4oSK51H2nv5-bdd4AaABAg.ACt5MFlW7FUACtfRAKsJGk","responsibility":"developer","reasoning":"virtue","policy":"ban","emotion":"outrage"},
{"id":"ytr_Ugy_4oSK51H2nv5-bdd4AaABAg.ACt5MFlW7FUANYAEoVPddv","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytr_UgyMzOr-o-syUK252Yl4AaABAg.AS3KjTtY5osAUnuCQKKk4f","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytr_UgyWQO4OsI26z5fjpXJ4AaABAg.A8uIlgqzeAaA9LSbZv61Mj","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytr_Ugz9DFzgbWCkRhXa2uh4AaABAg.A8oXWLZr3jRAHvtVGhoXD6","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytr_UgxAZQQlp06MevXF0X94AaABAg.AUy758_E34XAUy7vqcf9JI","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytr_UgxyYB7NKI4cF8BZFHR4AaABAg.AUxZTwvp5t4AUyG2_4Bc_R","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"mixed"}
]
```
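
The raw response is a JSON array with one record per comment in the submitted batch. A minimal sketch of how the lookup by comment ID could work against such a response; the function name and error handling here are illustrative assumptions, not the project's actual code:

```python
import json

def find_coded_record(raw_response: str, comment_id: str) -> dict | None:
    """Parse one raw batch response and return the record for a single comment ID."""
    try:
        records = json.loads(raw_response)
    except json.JSONDecodeError:
        # Model output is not guaranteed to be valid JSON; treat it as no match.
        return None
    return next((r for r in records if r.get("id") == comment_id), None)

# Example lookup against a two-record excerpt of the response shown above.
raw = '''[
  {"id":"ytr_UgxLpe7O3Hxludk2mIl4AaABAg.ACt84DeC56-ACxH_C_lKcR","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
  {"id":"ytr_Ugy_4oSK51H2nv5-bdd4AaABAg.ACt5MFlW7FUACtfRAKsJGk","responsibility":"developer","reasoning":"virtue","policy":"ban","emotion":"outrage"}
]'''
print(find_coded_record(raw, "ytr_UgxLpe7O3Hxludk2mIl4AaABAg.ACt84DeC56-ACxH_C_lKcR"))
```

Treating unparsable output as a missed lookup keeps the viewer usable even when a batch comes back as malformed JSON.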