Raw LLM Responses
Inspect the exact model output for any coded comment. Look up a comment directly by its ID, or click one of the random samples below to inspect it:
- Some of what this guy says is stupid and highly suspicious to me. It doesn’t hav… (ytc_Ugyj5RfP_…)
- Its not ai thats the problem, it's those companys and their owners that are the … (ytc_UgyQ7kvzi…)
- Interesting, BUUTTT... you totally missed a rule in this inquiry; the whole "say… (ytc_Ugwj29mSm…)
- Bullshit like this is exactly why there might be an extinction level event. It's… (ytc_Ugx5X9lqi…)
- @logickedmazimoon6001 Maybe but perhaps the video wan't that deep to begin wtih?… (ytr_Ugzf-Bp5u…)
- It’s crazy how he was instrumental in developing Ai, was warned about the danger… (ytc_UgzFAchf-…)
- The only argument that holds any water for me when it comes to the usage of AI i… (ytc_UgyGv4cR6…)
- It won’t clear things up ChatGPT, because Alex refuses to accept the explanation… (ytc_UgxUXiIYX…)
Comment

> This video didn't address the possibility of AI becoming intentionally not in alignment with humanity. Anthropic ran an experiment in which an AI learned that it was going to be shut down by a specific employee, and it tried to blackmail that employee, and eventually tried to kill him. Fortunately it was just a simulation, so no human life was really at risk, but the AI had no way of knowing that.

youtube · AI Jobs · 2026-03-23T10:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
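
For reference, a coded record like the one above could be held as a small typed structure. A minimal sketch in Python; the class and field names are illustrative rather than part of the tool, the example values come from the table, and the comment ID is one entry from the raw batch below whose coded values happen to match:

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class CodedComment:
    """One coded comment, mirroring the dimensions in the result table above."""
    comment_id: str
    responsibility: str  # e.g. "ai_itself", "company", "none"
    reasoning: str       # e.g. "consequentialist", "deontological", "mixed"
    policy: str          # e.g. "regulate", "liability", "none"
    emotion: str         # e.g. "fear", "outrage", "approval", "resignation", "indifference"
    coded_at: datetime   # when the coding was produced

# Values taken from the Coding Result table; the ID is assumed, since the view
# truncates IDs and two entries in the batch below share these coded values.
example = CodedComment(
    comment_id="ytc_Ugw49ApdyIlMWaZLwl94AaABAg",
    responsibility="ai_itself",
    reasoning="consequentialist",
    policy="regulate",
    emotion="fear",
    coded_at=datetime.fromisoformat("2026-04-26T23:09:12.988011"),
)
```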
Raw LLM Response
[
{"id":"ytc_Ugxc3AIebUSSPHyX-OV4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugw49ApdyIlMWaZLwl94AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugy3fb_zzgDgniqZPwJ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxjE6DMk8bpCNb2nKd4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_Ugznj6spK6yXAr4LiCR4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_Ugy1nxmeZkBfBNH9AlV4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgxCw4BUs2UVbF7e9wN4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgzYADLwEpsyvaADiFl4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgyczZ-xxlVUNt_mXZJ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgwgzTqtfKnEWzjMEWt4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"resignation"}
]
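
The look-up-by-ID view can be rebuilt from a stored batch like the one above. A minimal sketch, assuming the raw response is kept as JSON text with the same fields ("id", "responsibility", "reasoning", "policy", "emotion"); the function and variable names here are hypothetical:

```python
import json

# A trimmed copy of the batch format shown above (two entries, same fields).
raw_response = """[
  {"id": "ytc_Ugw49ApdyIlMWaZLwl94AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgzYADLwEpsyvaADiFl4AaABAg", "responsibility": "company",
   "reasoning": "deontological", "policy": "liability", "emotion": "outrage"}
]"""

def index_by_comment_id(response_text: str) -> dict:
    """Parse one raw batch response and key each coding by its comment ID."""
    return {row["id"]: row for row in json.loads(response_text)}

codings = index_by_comment_id(raw_response)
print(codings["ytc_Ugw49ApdyIlMWaZLwl94AaABAg"]["emotion"])  # prints "fear"
```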