Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up a comment by its ID, or pick one of the random samples below to inspect it:
- "Surely the idea of humans adapting to autonomous cars is quite absurd. Why shoul…" (ytc_Ugzh3xXJX…)
- "@NahinSarker-pb2yr i know its fake lmao... im saying imagine in the future.. a …" (ytr_UgykoPFeu…)
- "We should have Ai doctors with human doctors so everyone can afford some type of…" (ytr_Ugzu6h1lc…)
- "As an artist, it feels flattering to have another artist mimic or be inspired by…" (ytc_UgwU9hGIS…)
- "Maybe that's why we let them keep building them ? Classic control - who has the …" (ytc_UgyEeeOQq…)
- "I don’t think writing syntax really matters lol. Syntax basically changes all th…" (ytr_Ugx3JLIjl…)
- "AI is a misnomer, The Term VI would be more precise. AI has yet to occur.…" (ytc_UgwGvT-Gf…)
- "When AI wipes out the working class, then many people will have no money to spen…" (ytc_UgzBt6NkP…)
Comment
Our intelligence and that of AI is a difference of kind, I think, but I don't take comfort in that. That's kind of the point - I can trust that I could give most humans a goal and they'll at least try to pursue it without going off on a disastrous tangent.
At one point Nate was talking about capabilities. If an AI system is developed that has the capability to seriously harm humanity, the architecture of that capability kind of won't matter. Whatever form that takes, whether very alien or very human, what matters is whether or not it will use that capability to seriously harm humanity.
We're seeing signs now of intention in misalignment - not just that we didn't properly teach the AI what to care about, and so it carelessly did something damaging, but that they can be "aware" of their misalignment and take steps to pursue their own contrary goals.
We can quibble over what "aware" means in that sentence, but in the meantime, if it "chooses" to kill someone, that person will be dead.
youtube · AI Moral Status · 2025-11-17T15:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
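
The dimension names in this table, together with the raw response below, suggest a simple record type for each coded comment. A minimal Python sketch follows; `CodingResult` is a hypothetical name, and the value sets list only the codes observed in this one sample, not necessarily the full codebook:

```python
from dataclasses import dataclass
from datetime import datetime

# Value sets observed in this sample; the actual codebook may define more.
RESPONSIBILITY = {"ai_itself", "user", "developer", "distributed", "unclear"}
REASONING = {"consequentialist", "deontological", "contractualist", "unclear"}
POLICY = {"regulate", "unclear"}
EMOTION = {"fear", "mixed", "indifference", "approval"}

@dataclass
class CodingResult:
    """One coded comment: four dimensions plus coding metadata."""
    comment_id: str
    responsibility: str
    reasoning: str
    policy: str
    emotion: str
    coded_at: datetime

    def __post_init__(self) -> None:
        # Reject any value outside the observed codebook.
        for value, allowed in (
            (self.responsibility, RESPONSIBILITY),
            (self.reasoning, REASONING),
            (self.policy, POLICY),
            (self.emotion, EMOTION),
        ):
            if value not in allowed:
                raise ValueError(f"unexpected code: {value!r}")
```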
Raw LLM Response
```json
[
  {"id":"ytc_UgzygqCafbRLsp9Xr194AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugywgk6du9hbvl99LO94AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugyl3QgrWOTtl6hKe3R4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgzPh9ySYWWVptvVjrF4AaABAg","responsibility":"unclear","reasoning":"deontological","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgyyxM9y89cm6W4WC954AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ugy4E7InsIdi_3w7hNB4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugwyn9yX1AMEJtOc7114AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugy9JSmCZTyTbp2N4NZ4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgyfrNfhl5S1I770on14AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgwmRXUeGPtQkWYsN-p4AaABAg","responsibility":"developer","reasoning":"contractualist","policy":"regulate","emotion":"approval"}
]
```
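
Raw model output is not guaranteed to be well-formed JSON, so a defensive parser is useful when loading a batch like the one above. A minimal sketch, assuming the hypothetical `CodingResult` type from earlier (`parse_batch` is likewise an illustrative name, not this tool's actual API):

```python
import json
from datetime import datetime

def parse_batch(raw: str) -> dict[str, CodingResult]:
    """Parse one raw batch response into records keyed by comment ID."""
    # The model may emit malformed JSON; fail loudly rather than guess.
    try:
        items = json.loads(raw)
    except json.JSONDecodeError as err:
        raise ValueError(f"model returned malformed JSON: {err}") from err
    return {
        item["id"]: CodingResult(
            comment_id=item["id"],
            responsibility=item["responsibility"],
            reasoning=item["reasoning"],
            policy=item["policy"],
            emotion=item["emotion"],
            coded_at=datetime.now(),  # placeholder; a real pipeline would record this
        )
        for item in items
    }
```

Keying by comment ID makes the per-ID lookup at the top of this page a single dictionary access, and any record carrying an out-of-codebook value is rejected by `CodingResult`'s own validation.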