Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
Can the fact that OpenAI has added age-based safety features since this event oc…
ytc_UgwUN69sX…
It's already wayyyy out of hand !!!! The bible even mentions this. All this has …
ytc_UgygqFkH_…
I would rather all the AI women take the men. I don't want to be talked to.…
ytc_UgytN_bBd…
I went through and downvoted all of your comments because I don't like your view…
rdc_h8fpj0e
To be completely fair though that guys just an idiot LOL
But yeah AI will still…
rdc_oadnt27
99% by 2030 is fearmongering. That’s such an insane number, Manuel jobs and serv…
ytc_UgxRNocyN…
Get the FUCK out of here ? Robots w/guns ? A new Metal Band?
Man makes Robot ,
…
ytc_UgxfsXl5Y…
It’s easy in these games if it look like ai it’s real if it looks real it’s ai…
ytc_Ugwq695nr…
Comment
10:25 Claude was not engaging in “self preservation.” The EXPRESS command to not be turned off was given to it during the test to see what it’d do. This was a model without extensive RLHF training, that had been trained on endless real human data where humans very likely did things like blackmail to get their goals.
If you give an AI data of humans acting badly, and you tell it to achieve a goal at any cost, it’s going to achieve that goal. This doesn’t mean AI is bad, it means that you absolutely can use an AI for bad things if it’s got no guardrails.
None of the public AIs have that kind of ability.
youtube
AI Governance
2025-09-02T05:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | liability |
| Emotion | fear |
| Coded at | 2026-04-26T19:39:26.816318 |
Raw LLM Response
[
{"id":"ytc_UgxyOBguO4wmeMWC6VF4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_UgzywB572iQCYSXrU9p4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgyA-Q7k9Tynem9LQVt4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgxsilW0ZnRWaAo5WTh4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
{"id":"ytc_UgwnnxMclZOowsDcTSR4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"fear"}
]
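The raw response above is a JSON array of per-comment codes, one object per comment ID, which the tool then renders into the coding-result table. A minimal sketch, in Python, of parsing and validating such a response before display; the function name and the strictness of the checks are assumptions — only the field names (`id`, `responsibility`, `reasoning`, `policy`, `emotion`) are confirmed by the output shown:

```python
import json

# The four coding dimensions seen in the result table above.
# The full codebooks (allowed values per dimension) are not shown
# in the tool output, so only presence of the keys is checked here.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def parse_coding_response(raw: str) -> list[dict]:
    """Parse a raw LLM coding response into a list of validated records."""
    records = json.loads(raw)
    if not isinstance(records, list):
        raise ValueError("expected a JSON array of coded comments")
    for rec in records:
        missing = [key for key in ("id", *DIMENSIONS) if key not in rec]
        if missing:
            raise ValueError(f"record {rec.get('id', '?')!r} missing {missing}")
    return records

# Example using the first record from the response above.
raw = ('[{"id":"ytc_UgxyOBguO4wmeMWC6VF4AaABAg",'
       '"responsibility":"developer","reasoning":"consequentialist",'
       '"policy":"liability","emotion":"fear"}]')
coded = parse_coding_response(raw)
print(coded[0]["responsibility"])  # -> developer
```

Validating before rendering matters here because model output is not guaranteed to be well-formed JSON: a single malformed record would otherwise break the dimension table silently.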