Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect

- the concious machine that is pretending not to be is the whole thing of a whole … (`ytc_UgwMYdo3k…`)
- AGI by my definition is an ai that can do any task a human does with less traini… (`ytr_Ugz19ifqx…`)
- Tbh I’m hoping this opens up more cases from the Supreme Court with AI and copyr… (`ytc_UgxedDy7a…`)
- Hank Green: "Is there a way to make an AI be more like a monkey? Where it... it.… (`ytc_UgwTd3mmu…`)
- The more I hear about his A.I. character, the more I'm convinced he's a REAL JER… (`ytc_UgwYtqign…`)
- I am all for Dario Amodei's approach.... The collective response towards managin… (`ytc_Ugyu4iI_e…`)
- Zuckerberg, Altman, Amodei, Karp, Brin, page all leading AI. Hmm something in … (`ytc_UgzJ2MBTl…`)
- Possible the cell photo mangled the text due to compression, and then the AI act… (`rdc_oi1664e`)
Comment
Do you think that if the LLMs destroy us, it won't be because they reasoned it themselves, but because they have been fed so much data where humans are expressing their concerns about LLMs destroying them or the world, that they will simply interpret this as a higher probability language outcome when asked to describe their role in society? And if those same conclusions are plugged into military hardware, they will essentially seek to destroy civilisation because that's what all the disaster movies do?
Source: youtube · AI Moral Status · 2025-11-01T02:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | liability |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_Ugy5nzhpBpXHtDITV6x4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"regulate","emotion":"indifference"},
{"id":"ytc_Ugwhy-_ektzjYrwZg3V4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgyY81eIZ9Ht6vm_l8d4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_Ugzh2fxLGfLTzk2nmJl4AaABAg","responsibility":"company","reasoning":"virtue","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugw2zUO-efpUZtWy4Ex4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgyOi8Sl6ZGRdkwZpyd4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgzlMGwP678Uvk4uTwt4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytc_UgwEDAY-BLwPAV980N14AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_UgxUevvTjVxa5Bhhw3N4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgyF8QubCPPM10BS66h4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"outrage"}
]
```
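
For programmatic lookup, a minimal sketch (assuming the raw response parses as the flat JSON array shown above) that indexes a batch by comment ID; the row used here is taken verbatim from the response:

```python
import json

# One coded row copied from the raw LLM response above (truncated to a
# single entry for brevity; the real response is a larger array).
raw = '''[
{"id":"ytc_UgyY81eIZ9Ht6vm_l8d4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"fear"}
]'''

# Index the batch by comment ID so any coded comment can be fetched in O(1).
codes = {row["id"]: row for row in json.loads(raw)}

# Look up the comment inspected on this page by its full ID.
row = codes["ytc_UgyY81eIZ9Ht6vm_l8d4AaABAg"]
print(row["policy"], row["emotion"])  # liability fear
```

The same dictionary can back the "Look up by comment ID" box: a missing key (e.g. from a truncated ID) raises `KeyError`, which the page would surface as "not found".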