Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
Honestly, who cares about movement? AICarma is my go-to for making sure my brand…
ytc_UgxnAiQVJ…
Great summary. Like the viewpoint. Anyone thinking computers can really think …
ytc_Ugw4Fb51C…
While you make lots of great points I think you are looking at this through the …
ytc_Ugyz9EgGE…
All the fast food places around me already have automated ordering so you talk t…
ytc_Ugy-FnZoS…
I wouldnt trust chatgpt 100% to do your assignments. You can chatgpt as a guide …
ytc_UgwICx-hn…
My fight against ai is on youtuuuuuuuuuuube! I report videos that are ai and oth…
ytc_UgwEFwikv…
If I were a more Evil person I might create something that is the exact opposite…
ytc_Ugx3ZoXH4…
Are you sure you'll be able to find experts who aren't representing the interest…
ytc_Ugzl2MVhw…
Comment
The idea of misaligned AI was not invented by Yudkowsky. It's been an idea in the field going all the way back to the inventor of AI, Alan Turing. This is something taken seriously by most of the top researchers in the field, including Geoffrey Hinton (Nobel prize winner), Yoshua Bengio (most cited living scientist), and the founders and lead scientists of most top AI companies (including well before they ever thought they would need to raise lots of money). This is not at all a fringe belief in the field. If you think it's just some weird apocalyptic cult thing, then you should seriously question the biases that led you to that assumption.
Whether or not AI is conscious is totally irrelevant to misalignment, but I feel I should mention that there is no "broader shift away from the computational theory of the mind". Functionalism remains the most popular philosophy of mind among philosophers. The Chinese Room argument is generally not held in high regard and is mostly just a thing that's talked about in popsci articles.
youtube · AI Moral Status · 2025-10-31T16:1… · ♥ 4
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | indifference |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
{"id":"ytr_Ugw8t2pJuvDSdk7NpZ94AaABAg.AOwTx350hytAOxD-cCZhcw","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytr_Ugw2dejtxDMfqDtsFHx4AaABAg.AOwRqxCGf4DAOwZSv2F5cP","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
{"id":"ytr_Ugw2dejtxDMfqDtsFHx4AaABAg.AOwRqxCGf4DAOw_AgBIoZB","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"approval"},
{"id":"ytr_UgyQ6cX3vzGK0IYWCip4AaABAg.AOwLp8faSPLAOwW8P-SdxB","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytr_UgyQ6cX3vzGK0IYWCip4AaABAg.AOwLp8faSPLAOwZ5xvXRQX","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"resignation"},
{"id":"ytr_UgzsZXVqHuryCnOFNR54AaABAg.AOwLAKvGibKAOxDWb50YJx","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytr_UgzsZXVqHuryCnOFNR54AaABAg.AOwLAKvGibKAOxED2njSyV","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
{"id":"ytr_UgzsZXVqHuryCnOFNR54AaABAg.AOwLAKvGibKAOxmxMIv2GS","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytr_Ugzgpt1tdS4toFzLxIZ4AaABAg.AOwKrJFF_pzAOwzcEtaNIK","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"disapproval"},
{"id":"ytr_Ugzgpt1tdS4toFzLxIZ4AaABAg.AOwKrJFF_pzAOx47Esi_NE","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"disapproval"}
]
```
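A raw response like the one above has to be parsed and validated before the codes can be attached to comments. The sketch below is a minimal, hypothetical validator: the dimension names (`responsibility`, `reasoning`, `policy`, `emotion`) come from the responses shown here, but the allowed-value sets are only inferred from these ten records, so the real codebook may define more categories.

```python
import json

# Allowed values inferred only from the sample responses above;
# the actual codebook may include additional categories.
SCHEMA = {
    "responsibility": {"none", "ai_itself", "developer", "company"},
    "reasoning": {"consequentialist", "unclear"},
    "policy": {"unclear", "regulate"},
    "emotion": {"indifference", "approval", "resignation",
                "fear", "outrage", "disapproval"},
}


def parse_coding_response(raw: str) -> list[dict]:
    """Parse a raw LLM coding response and check every record
    has an id plus a valid value for each coded dimension."""
    records = json.loads(raw)
    for rec in records:
        if "id" not in rec:
            raise ValueError(f"record missing 'id': {rec}")
        for dim, allowed in SCHEMA.items():
            value = rec.get(dim)
            if value not in allowed:
                raise ValueError(f"{rec['id']}: bad {dim!r} value {value!r}")
    return records


# Hypothetical single-record example in the same shape as the raw response.
raw = ('[{"id":"ytr_example","responsibility":"none",'
       '"reasoning":"consequentialist","policy":"unclear",'
       '"emotion":"indifference"}]')
records = parse_coding_response(raw)
print(records[0]["emotion"])  # indifference
```

Failing fast on an unknown value catches the common failure mode where the model invents a label outside the codebook, rather than silently storing it.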