Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- "Yes you're totally right in present time. But in long term ai will blow up. And …" (ytc_UgzILM75u…)
- "I've seen this story many years ago / Hopefully this family changes things / EDIT: O…" (ytc_UgxF6dXCT…)
- "@ChudMiser I've watched a tutorial and it just seems to me like a way to break …" (ytr_UgwtCA8uR…)
- "Its pointless to think what to do once you get artificial intelligence, if you c…" (ytc_UggJXPMrG…)
- "Coding doesn't matter. You have to be strong with logic building. Rest give the …" (ytc_UgzTg-sOD…)
- "the hate on AI is so absurd and unhinged. But I do love when people volunteer th…" (ytc_UgyH7KHvh…)
- "That's an interesting thought! Keeping AI in our phones definitely makes it acce…" (ytr_UgxzrXDFC…)
- "@celestia277 Sure they can exist. They just got a new tool to be more productive…" (ytr_UgyQS4fvR…)
Comment
I think Yudkowsky is probably right that superhuman AI will eventually be an existential threat to humanity, and probably wrong that the current generation of LLMs is actually close to being that sort of superhuman AI. He's confusing "there will come a time" with "now's the time."
youtube · AI Moral Status · 2025-10-30T20:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | mixed |
| Policy | unclear |
| Emotion | indifference |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_UgyxWkZDXLDME-fYhEZ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"curiosity"},
  {"id":"ytc_Ugw1PCHJW4gLvC6wQIN4AaABAg","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgyrNigDK8aED1XKiK94AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
  {"id":"ytc_Ugzj8Z_Zm93--2u2OwJ4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_Ugy8yE32C1YttioFQ554AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugx5SJWy13XghxRHVft4AaABAg","responsibility":"none","reasoning":"unclear","policy":"ban","emotion":"mixed"},
  {"id":"ytc_Ugyd_R36BObUKSp2C_N4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgxmdQyRuIhy-6PAnFJ4AaABAg","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Ugzqi8MbySlCA33BHk94AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
  {"id":"ytc_Ugw8_HNK7NKjFS0CEQt4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"}
]
```
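A raw response like the one above has to be parsed into per-comment records before the coded dimensions can be displayed. Below is a minimal sketch of such a parser, assuming the allowed category sets inferred from this sample (the real codebook may define more values) and a hypothetical `parse_coding_response` helper name; it also strips an optional markdown fence, since models sometimes wrap JSON output in one.

```python
import json

# Allowed values per dimension, inferred from the sample output above;
# the real codebook (an assumption here) may include more categories.
ALLOWED = {
    "responsibility": {"none", "distributed", "company", "ai_itself"},
    "reasoning": {"unclear", "mixed", "consequentialist", "deontological"},
    "policy": {"unclear", "none", "regulate", "ban"},
    "emotion": {"curiosity", "indifference", "approval", "outrage",
                "fear", "mixed", "resignation"},
}

def parse_coding_response(raw: str) -> dict:
    """Parse a raw LLM coding response into {comment_id: codes}."""
    text = raw.strip()
    # Models sometimes wrap JSON in a ```json ... ``` fence; strip it.
    if text.startswith("```"):
        text = text.split("\n", 1)[1].rsplit("```", 1)[0]
    coded = {}
    for rec in json.loads(text):
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{rec['id']}: bad {dim}={rec.get(dim)!r}")
        coded[rec["id"]] = {dim: rec[dim] for dim in ALLOWED}
    return coded

raw = ('[{"id":"ytc_Ugw1PCHJW4gLvC6wQIN4AaABAg","responsibility":"none",'
       '"reasoning":"mixed","policy":"unclear","emotion":"indifference"}]')
coded = parse_coding_response(raw)
print(coded["ytc_Ugw1PCHJW4gLvC6wQIN4AaABAg"]["emotion"])  # indifference
```

Indexing by comment ID is what makes the "look up by comment ID" view cheap: each lookup is a single dict access rather than a scan over all responses.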