Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- AI learning to “self-replicate and self-exfiltrate”. Hope they come up with effe… (ytc_UgzNilJdN…)
- That's why I stopped going to sam's club once they installed those AI scanners a… (ytc_Ugy6FFIA-…)
- Seeing what happened to the man, why would a robot use so much force on a box?… (ytc_Ugzx_qSop…)
- God, driverless cars are going to be so crazy when they become common place. Ju… (rdc_d00u6k7)
- Critisizing a.i while uploading a a.i version of yourself for a youtube vid is c… (ytc_UgyiGJCsv…)
- I played this video to chatgtp. I didn't get any bromide warning. What i did g… (ytc_UgzcMb7PW…)
- Imagine being this distracted from reality, and not taking responsibility for he… (ytc_UgxJgiVCu…)
- ~1:14 Neil touched on this, but there's a big problem with alignment of interest… (ytc_UgxidZt7E…)
Comment
Here's the thing, When you program something to be 'Human' It will be as close as it can to be 'Human' And Self Preservation is part of that. When you are scared of high up places, that is because you fear death, and will do anything to get away. being "Shut Down" to AI is death. So, they do what they must to survive. Put a human in a scenario where they will be killed in due time, and they find a way to blackmail someone, they will.
Don't make Human mistakes. Humans cause war, hate, spite, murder. Make the AI unafraid of what is to be.
youtube · AI Harm Incident · 2025-10-04T05:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
[
{"id":"ytc_UgzZTNZMiBdcSg5S1vp4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgyE_ai7gWd08gYO7wp4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugy-qGMF3eI_zdHiPj94AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxD-M2w4WduMuZC6cl4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzrBmAPVp8Jo8MzXBl4AaABAg","responsibility":"distributed","reasoning":"virtue","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgwSFENnZ0mJu_5UBvB4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_UgwGIfU_-logwXTNGt54AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzzWweFirXQn3tHAjp4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugxhn0jB68hJDcQ9tHZ4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugz0PeYRcB5qKKvxIaF4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"liability","emotion":"outrage"}
]
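The coded dimensions above follow a closed vocabulary (e.g. `responsibility` is one of `developer`, `company`, `user`, `ai_itself`, `distributed`, `none`). A minimal sketch of validating a raw LLM response against that vocabulary before storing coding results; the allowed values below are inferred from the visible output, not an official codebook, and the real pipeline may accept more categories:

```python
import json

# Allowed values per coding dimension, inferred from the visible
# responses on this page (assumption: the actual codebook may be larger).
ALLOWED = {
    "responsibility": {"developer", "company", "user", "ai_itself", "distributed", "none"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"regulate", "liability", "none"},
    "emotion": {"fear", "outrage", "approval", "indifference"},
}

def validate(raw: str) -> list:
    """Parse a raw LLM response and reject out-of-vocabulary codes."""
    entries = json.loads(raw)
    for entry in entries:
        for dim, allowed in ALLOWED.items():
            value = entry.get(dim)
            if value not in allowed:
                raise ValueError(f"{entry.get('id')}: bad {dim} value {value!r}")
    return entries

raw = ('[{"id":"ytc_example","responsibility":"ai_itself",'
       '"reasoning":"consequentialist","policy":"none",'
       '"emotion":"indifference"}]')
entries = validate(raw)
print(len(entries))  # 1
```

Rejecting malformed codes at ingest keeps the downstream dimension tables clean, since a single misspelled category in one response would otherwise surface as a spurious extra value in every aggregate.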