Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples

- "No clanker is getting near my healthcare. There should be an option for AI free …" (ytc_UgxnuHhJT…)
- "if we focus on ai emotional intelligence gpt4.0 said it would lower risk by 22% …" (ytc_Ugym_-2bu…)
- ">The toll of opting out of real relationships, in all their mess and glory, e…" (rdc_ohy1spr)
- "What do you think about use for song lyrics. My favorite artist Steven wilson d…" (ytc_UgyQxfJT8…)
- "This video is good, but it's missing practical, hands-on tips for actually using…" (ytr_UgxFuUnof…)
- "Ummmmm, just tell chatgpt to make the painting have stark contrasts to separate …" (ytc_Ugw9hx72g…)
- "If we are screwed either way, at least let the AI and corporate overlords burn w…" (ytc_UgwlD72i_…)
- "Ai can do a lot which is cool i can cheat on my tests and school work, but its n…" (ytc_Ugxb4cFkp…)
Comment
The question about how devastating AI could be ultimately lies with what we entrust to the internet. As long as AI has insufficient ways to interface with the "real word" outside of the internet, it could create devastating problems that might cause billions to die, but it most likely couldn't exterminate us, only try to manipulate us into destroying ourselves. If it got access to weapons, manufacturing and such, it would have a decent shot at it. The second biggest problem, however, is the population's reliance on AI. If people keep blindly following AI because of how convenient it is, it won't even have to fight us because many of us will join it willingly and just submit because they've given up their ability to think independently.
Source: youtube
Video: AI Moral Status
Posted: 2026-01-31T23:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
[
{"id":"ytc_UgzooU7og5yiZubruLx4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
{"id":"ytc_Ugyt9K0L7cxKO-LtNQx4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgwFZpniVLIAvvKnvXV4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"liability","emotion":"mixed"},
{"id":"ytc_UgxoQDkGmIIz7fvrIaZ4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugwdm7BRmOHSE3wfMOp4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgyzzrOF9ifuU6xXw9B4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgwaLTEVBUIwltc2tlh4AaABAg","responsibility":"government","reasoning":"contractualist","policy":"regulate","emotion":"approval"},
{"id":"ytc_UgwXhiZUv86HiX1A0lB4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgwfFFcWNtrZ53U0aER4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"industry_self","emotion":"outrage"},
{"id":"ytc_UgzcCxbF9AtqAN0naUt4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"approval"}
]
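The raw response is a JSON array of per-comment codes, one object per comment ID, with one value for each of the four dimensions shown in the coding-result table (responsibility, reasoning, policy, emotion). As a minimal sketch of how such output could be parsed and sanity-checked, the snippet below validates each row against the category values observed in the sample above; the actual codebook may define additional values, and the `validate` helper is a hypothetical name, not part of the tool:

```python
import json

# Category values observed in the sample output above; the full
# codebook may allow more values (assumption).
OBSERVED = {
    "responsibility": {"developer", "distributed", "user", "ai_itself", "government"},
    "reasoning": {"consequentialist", "deontological", "virtue", "contractualist", "mixed"},
    "policy": {"ban", "regulate", "liability", "industry_self", "none", "unclear"},
    "emotion": {"outrage", "fear", "approval", "indifference", "mixed"},
}

def validate(raw: str) -> list[dict]:
    """Parse a raw LLM response and flag rows with unexpected values."""
    rows = json.loads(raw)
    for row in rows:
        for dim, allowed in OBSERVED.items():
            if row.get(dim) not in allowed:
                print(f"{row.get('id')}: unexpected {dim}={row.get(dim)!r}")
    return rows

# Hypothetical single-row response, in the same shape as the sample above.
sample = ('[{"id":"ytc_example","responsibility":"developer",'
          '"reasoning":"mixed","policy":"ban","emotion":"fear"}]')
rows = validate(sample)
```

A check like this is useful because LLM coders occasionally emit values outside the codebook; flagging them before analysis keeps the coded dataset consistent.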