Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- “But why does it matter if it is AI generated or not? Too l the artists out there…” — ytc_UgxVsiwO8…
- “This county is hardly the first to use predictive policing, which is more involv…” — ytc_UgyLf6_yR…
- “The rich people will still need to fuck us and our kids though. Until they have …” — ytc_UgyN_vgEE…
- “That argument only holds when a technology still needs uniquely human skills, su…” — ytr_Ugxf3Pgpf…
- “Microsoft fired it's AI ethics team, Google can grow more than chat Gpt. I favor…” — ytr_UgwKZ8VzF…
- “Wow the robot can say human is not concious? They r concious but they ran away a…” — ytc_UgwEdgTlL…
- “You work with your head you’re on your way to self destruction. You work with yo…” — ytc_Ugx2vcwBH…
- “@ChuckMartin-dm9qe you don't know what AI is. There's a lot of that around. Ig…” — ytr_UgzDfIcHX…
Comment
“The right questions to ask are not ‘will the machines turn on us?’, but [are]:
- Who benefits when [AI] succeeds?
- Who pays if it fails?
- What permissions have been granted?
These are not cinematic questions, but they are the ones that will decide whether technology serves human flourishing, or quietly reshapes it.”
From this video:
https://youtu.be/e2P_CLpUXFs?si=dxijfDwC0_HBSeT- — “AI Moral Status” (YouTube), 2025-11-26T00:0…
♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | contractualist |
| Policy | regulate |
| Emotion | approval |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_UgyqjsqfuqVEzcWSs2J4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgzPQFLqvk0wc_pE3Ed4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Ugx1xsc3KYPErxaMBOJ4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ugz8CmJOUD1ilLkOLYV4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgxnEZq9A_8AvcwIUZh4AaABAg","responsibility":"distributed","reasoning":"contractualist","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ugzo9TpISftgSBk6lJh4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"ban","emotion":"outrage"},
  {"id":"ytc_Ugy5VOFsmhM_o97_oKJ4AaABAg","responsibility":"distributed","reasoning":"contractualist","policy":"regulate","emotion":"approval"},
  {"id":"ytc_UgxEfsiPeWqSH7g9Wht4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgxCX80X9CDEhqTQ-PN4AaABAg","responsibility":"user","reasoning":"deontological","policy":"liability","emotion":"mixed"},
  {"id":"ytc_UgzrWfDlRKW3Gxhp7zR4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"}
]
```
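The lookup-by-comment-ID workflow above can be sketched in Python. The field names (`id`, `responsibility`, `reasoning`, `policy`, `emotion`) match the raw response shown; the `lookup` helper is an illustrative assumption, not the dashboard's actual code:

```python
import json

# A raw LLM coding response is a JSON array of per-comment codings,
# as in the "Raw LLM Response" block above (abbreviated here).
raw_response = """
[
  {"id": "ytc_Ugy5VOFsmhM_o97_oKJ4AaABAg", "responsibility": "distributed",
   "reasoning": "contractualist", "policy": "regulate", "emotion": "approval"},
  {"id": "ytc_UgxCX80X9CDEhqTQ-PN4AaABAg", "responsibility": "user",
   "reasoning": "deontological", "policy": "liability", "emotion": "mixed"}
]
"""

# Index the codings by comment ID so a single comment can be inspected.
codings = {row["id"]: row for row in json.loads(raw_response)}

def lookup(comment_id):
    """Return the coded dimensions for a comment ID, or None if uncoded."""
    return codings.get(comment_id)

print(lookup("ytc_Ugy5VOFsmhM_o97_oKJ4AaABAg")["policy"])  # regulate
```

Coded dimensions for any sampled comment can then be fetched by its `ytc_…`/`ytr_…` ID, which is what the "Look up by comment ID" control does interactively.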