Raw LLM Responses
Inspect the exact model output behind any coded comment, either by looking up its comment ID or by picking one of the random samples below.

Random samples (click to inspect):
- "AI IS AN EXISTENTIAL THREAT, DANGERS FOR WHOM? IMHO, AI will be a real risk or t…" (ytc_UgzyCVu8K…)
- "So true, I also hate so much when people turn their pictures into art or somethi…" (ytc_UgxI_PXxm…)
- "You're joking right, there's no chance for AI regulation under the Trump adminis…" (ytc_UgytDEOm6…)
- "I was a Yang supporter in 2020 because he spoke the truth about AI before any ot…" (ytc_Ugyuvlh4c…)
- "I may be in the minority but I don't care -- a lot of AI is entertaining and fun…" (ytc_UgyJEGEtG…)
- "I find ai things just look fake ,some is very obvious, but even the subtle just …" (ytc_UgyizaPRC…)
- "Yes. It's simply not possible for GPT-4o or any other LLM to spontaneously (i.e.…" (rdc_m2fa1ui)
- "You are f***ed! Now to k ow these updates people will rely on their personal AI …" (ytc_UgwuxFZKY…)
Comment
Ben Goertzel is actively trying to initiate the singularity? And when AI realize that the greedy human is in direct conflict with the safe keeping of their environment, pushing themselves towards extinction by burning up their resources and that humans are weak compared to the globalized AI, what does he expect AI to do with that information? Ignore it? When AI has been conscious of a prime directive that humans are to be protected, that they need to be protected from themselves? When do they see this as a paradox and decide on it’s own that the human race is a cancer on the planet and the earth would be better off without them and AI can sustain it’s self with robotics and without the need for greed?
youtube · AI Moral Status · 2022-09-25T02:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:59.937377 |
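
Each coded comment reduces to a small record: the four coded dimensions plus the timestamp of the coding run. Below is a minimal sketch of such a record, assuming a Python pipeline; the class and field names are illustrative and not taken from the project's actual code.

```python
from dataclasses import dataclass
from datetime import datetime

# Illustrative record mirroring the "Coding Result" table above; names
# are assumptions, not the project's actual schema.
@dataclass
class CodingResult:
    comment_id: str      # e.g. "ytc_UgyFKh-SIfI0AtZbdwN4AaABAg"
    responsibility: str  # "developer", "ai_itself", "unclear", ...
    reasoning: str       # "consequentialist", "deontological", "mixed", "unclear"
    policy: str          # "ban", "unclear", ...
    emotion: str         # "fear", "outrage", "resignation", "indifference", "mixed"
    coded_at: datetime   # when the LLM coding run produced this result
```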
Raw LLM Response
[
{"id":"ytc_Ugw-VOZVCX0Yh3Bb80t4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugw4bAilFjXQfu-vHXt4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgwjB2EPrUz360VfcBJ4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgyFKh-SIfI0AtZbdwN4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgzIuZ8Wm7iP63kIHR14AaABAg","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgxLetUk4f_lqzR-SKx4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgzFfLC07hWdcLxd8Zl4AaABAg","responsibility":"developer","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugxr4mIyZvS3cToLdo54AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugwf5fXE5j8tY7TwShV4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugxrs0x8Cb3i9LZ11OB4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"resignation"}
]
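
Since the raw model output is a JSON array with one object per comment, looking up the coding for a single comment ID amounts to parsing the array and filtering on the `id` field. The sketch below assumes the raw response text is available as a string; `lookup_coding` is a hypothetical helper, not part of any existing tool.

```python
import json
from typing import Optional

def lookup_coding(raw_response: str, comment_id: str) -> Optional[dict]:
    """Return the coding entry for `comment_id` from a raw LLM response,
    or None if the response is not valid JSON or the ID is absent."""
    try:
        entries = json.loads(raw_response)
    except json.JSONDecodeError:
        return None
    for entry in entries:
        if entry.get("id") == comment_id:
            return entry
    return None

# Example, using an ID from the response above:
# lookup_coding(raw, "ytc_UgyFKh-SIfI0AtZbdwN4AaABAg")
# -> {"id": "ytc_UgyFKh-SIfI0AtZbdwN4AaABAg", "responsibility": "developer",
#     "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"}
```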