Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples:

- "I think dumb people like the idea of being an "AI artist" because it's kind of t…" (ytc_UgzE_S-hC…)
- "Thankfully since AI images are generated by a computer they're basically just ma…" (ytc_UgxMMh0B8…)
- "I know people need jobs..i understand..No one needs this B.S. in their communi…" (ytc_UgzX3Bxtm…)
- "My prediction is, if AI surpasses humans, that humans will become more community…" (ytc_UgwezLuYE…)
- "It has been replacing from past 2 years and will continue to do so.. those who d…" (ytc_Ugwu-vp7c…)
- "I think the worst thing we (the people who hate AI) do is just blame the users o…" (ytc_UgzzfGjK1…)
- "Human Programmers are all children in the world of advanced AI as the real maste…" (ytc_Ugxv4V97h…)
- "art is subjective and you can apply meaning to essentially anything... therefore…" (ytc_UgxqjlvG_…)
Comment
I'm not trying to bend to the will of AI. My goal is to bend it to my will. This is simply a failure of the current model, and you're under no obligation to abide by the rules of the algorithm.
You might get better results. But you're limiting the learning potential of the bot.
youtube · AI Moral Status · 2025-04-17T12:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
[
{"id":"ytc_Ugwz8SpxFiBx07b_pBZ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_Ugw_oBaOtBYGIXGosph4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgyuHQxxh69w6KqQFxB4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"regulate","emotion":"approval"},
{"id":"ytc_UgwEcZEAFEYnFccZJgx4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"regulate","emotion":"approval"},
{"id":"ytc_UgwmPJUVYMyLVO8mnw94AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"regulate","emotion":"approval"},
{"id":"ytc_UgzLTGddtN9p9H9iMPp4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugy23vUGmBz0nl7rMjN4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"outrage"},
{"id":"ytc_UgxmMXDh5KOr-1ipSQ14AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgxEzxe9O5uMygg10Xl4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_Ugy9_vleILvQOtbT6Rx4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"fear"}
]
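The raw response above can be loaded and looked up by comment ID directly. The sketch below is a minimal example, assuming the allowed values for each dimension are exactly those observed in the table and responses on this page (the full codebook may define more); it indexes the rows by `id`, retrieves the coding for the selected comment, and flags any value outside the observed sets.

```python
import json

# Allowed values per dimension, as observed in the coding table and raw
# responses above (assumption: the actual codebook may allow more values).
ALLOWED = {
    "responsibility": {"user", "ai_itself", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed", "unclear"},
    "policy": {"regulate", "none", "unclear"},
    "emotion": {"approval", "indifference", "outrage", "fear", "unclear"},
}

# A one-row excerpt of the raw LLM response shown above.
raw = '''[
  {"id": "ytc_UgzLTGddtN9p9H9iMPp4AaABAg",
   "responsibility": "user", "reasoning": "consequentialist",
   "policy": "none", "emotion": "indifference"}
]'''

# Index the coded rows by comment ID for O(1) lookup.
codes = {row["id"]: row for row in json.loads(raw)}

def invalid_dimensions(row):
    """Return the dimensions whose value falls outside the observed sets."""
    return [dim for dim, ok in ALLOWED.items() if row.get(dim) not in ok]

row = codes["ytc_UgzLTGddtN9p9H9iMPp4AaABAg"]
print(row["reasoning"])        # consequentialist
print(invalid_dimensions(row)) # []
```

This matches the Coding Result table above: the selected comment is coded user / consequentialist / none / indifference.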