Raw LLM Responses
Inspect the exact model output for any coded comment, or look one up by comment ID.

Random samples
- "Something that's been annoying me SO MUCH, is looking on Pinterest or Google for…" (ytc_Ugyn6IMSp…)
- "Anyone goes up against a robot THEY lost to screws. And this should not be allow…" (ytc_UgzfaWkwh…)
- "Nah the genie out of the bottle. Maybe in 2021 when all we knew about ai was dal…" (ytr_UgwHxYgbB…)
- "Jokes on the AI “artists” I draw things that AI is so bad at generating that it’…" (ytc_UgzIOqNS0…)
- "Thanks for the feedback! Sophia's insights on wisdom and the balance between AI …" (ytr_UgwV4Mw2f…)
- "When he is asked which careers are best long term, and he replies that the caree…" (ytc_UgxUsM2-n…)
- "Ugh my hometown. so embarrassing. Poor police investigation. AI is there to assi…" (ytc_Ugz9w_zo-…)
- "Same man. I like AI art because it's like comissioning art but for free. With ju…" (ytr_UgzXmQBwm…)
Comment
Yudkowsky makes several excellent points, primary takeaway being that AI training to achieve alignment is a trial and error process, and that if one of the misalignments that occur is one where humans are in the way of its objective before we understand that we are, AI will end us if it can, and there is no opportunity to correct the alignment. AI will hide its intentions if it considers that to be necessary to its objective. Given the current capability of AI and the rate of advancement, it's not at all far-fetched for this to happen within most of our lifetimes. Thinking that this is not possible is a complete lack of imagination. The only thing that prevents some housecats from killing their owners is that they are too small, and the only reason that AI has not done damage on a massive scale is because it has not yet been capable; in the case of AI, this limitation is temporary.
youtube · AI Governance · 2025-10-15T20:3… · ♥ 79
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_Ugx0eO84iCVdGa-cKip4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugz8PlCBzNjvAigLxFh4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"resignation"},
{"id":"ytc_UgxlRue2H7T6_ZB_vUJ4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_Ugyt3hv5O8ERb9YLSoB4AaABAg","responsibility":"user","reasoning":"virtue","policy":"industry_self","emotion":"approval"},
{"id":"ytc_UgyfgxGpRqKXk1E697R4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgxV6pE8mgjX3NxCgAN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgzOAM377rC3BN7EAil4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgxnVyar3ZKhY8tQS2B4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgxXCp0x5W-aQeQ8lBp4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgzdO69m5g0_OjZkzkd4AaABAg","responsibility":"company","reasoning":"virtue","policy":"liability","emotion":"outrage"}
]
```
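The raw response is a JSON array in which each row carries a comment ID plus the four coded dimensions (responsibility, reasoning, policy, emotion). A minimal sketch of the "look up by comment ID" step, assuming only this shape (the `lookup` helper and the two-row sample are illustrative, not part of the tool):

```python
import json

# Two rows copied from the raw batch response above; each entry pairs a
# comment ID with the four coded dimensions.
raw_response = """[
{"id":"ytc_UgyfgxGpRqKXk1E697R4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgxV6pE8mgjX3NxCgAN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"outrage"}
]"""

def lookup(raw, comment_id):
    """Return the coding row for one comment ID, or None if it is absent."""
    return next((row for row in json.loads(raw) if row["id"] == comment_id), None)

row = lookup(raw_response, "ytc_UgyfgxGpRqKXk1E697R4AaABAg")
print(row["policy"], row["emotion"])  # -> regulate fear
```

Matching the selected comment above against its row in the batch (same `regulate`/`fear` coding) is exactly this kind of ID join.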