Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples:
- "what do you see as a benefit to this format over having bard analyze and summari…" (ytr_Ugx6EIPKO…)
- "AI will have all of the earth’s data, and can make a product that appeals to any…" (ytc_UgxUySkD4…)
- "yea its over fuck ai. im sick of having my humanity and creativity taken away fr…" (ytc_UgwuSXShK…)
- "Elon talks about regulations. A regulation to be enforced would be Isaac Asimov…" (ytc_UgxnTa00X…)
- "Truly disturbing. AI is a tool like a saw or a screwdriver. One problem is peopl…" (ytc_UgxnT8iWT…)
- "It was so mad because the other robot messed so it got angry and started to go c…" (ytc_Ugx66CcX1…)
- "Gee, what could possible go wrong? A killer robot uprising? Nah, everything is u…" (ytc_Ugyb-2AkT…)
- "AI will grow to the mark of the beast. The Bible discribes it well. This is the …" (ytc_UgztDVh38…)
Comment
On the point about "what's wrong with humans?"
I think we're willing to entertain the development of such a high-risk technology, as a society, because of what has been promised.
We're at a historic inflection point overall, there is massive political unrest across the globe, the economy of the world is extremely unstable, people are starving despite there being more than enough food (due to bad distribution), etc...etc...
And here come the AI companies saying "hey, it's all good, just bear with us a bit longer...we're inventing God and when we're done it'll all be better."
People are either desperate enough or otherwise preoccupied to realize the danger presented, and many feel like whatever comes out of it...will be better than what we have.
There are a LOT of people who have little else to hope for than the idea that AI will save us all. And THAT is far more dangerous I think.
Source: youtube · Video: AI Moral Status · Posted: 2025-11-03T07:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | government |
| Reasoning | contractualist |
| Policy | regulate |
| Emotion | outrage |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_UgwKBnOek438mAagMAd4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgzxYQRVAegFgHXg7Xx4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_Ugy1Uh_2A6Hmqz2zX3N4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"industry_self","emotion":"mixed"},
  {"id":"ytc_UgyWlZdsdRzUsOyBErZ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_UgyhMjYKq1Cxw9NDepx4AaABAg","responsibility":"government","reasoning":"contractualist","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgxRPbCIL-qFBAmgtih4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Ugw6tbpjSp5ybqmD2ON4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"industry_self","emotion":"mixed"},
  {"id":"ytc_UgzOZfFGG5Nz-yNf8cx4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgxNbi0qF58Lo_Arj2B4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_UgzuuXOlamvh4ku8XWV4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"industry_self","emotion":"mixed"}
]
```
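The "look up by comment ID" step above can be sketched in a few lines: since the raw model output is a JSON array of coding objects (each with an `id` plus the four dimensions), indexing it by `id` gives constant-time lookup. This is a minimal illustration, not the dashboard's actual implementation; the function name `parse_codings` is hypothetical, and the sample response below reuses two records from the array shown above.

```python
import json

# Hypothetical raw LLM response: a JSON array of coding objects, in the
# same shape as the batch shown above (two records reproduced here).
RAW_RESPONSE = """
[
 {"id":"ytc_UgyhMjYKq1Cxw9NDepx4AaABAg","responsibility":"government","reasoning":"contractualist","policy":"regulate","emotion":"outrage"},
 {"id":"ytc_UgyWlZdsdRzUsOyBErZ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"ban","emotion":"fear"}
]
"""

def parse_codings(raw: str) -> dict[str, dict]:
    """Index coded comments by comment ID for direct lookup."""
    return {row["id"]: row for row in json.loads(raw)}

codings = parse_codings(RAW_RESPONSE)
coded = codings["ytc_UgyhMjYKq1Cxw9NDepx4AaABAg"]
print(coded["policy"], coded["emotion"])  # → regulate outrage
```

In practice the parse step would also want to validate that every `id` in the response matches a comment that was actually sent in the batch, since LLM output can drop or mangle IDs.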