Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- `ytc_UgwEoWEv0…`: "No To AI survalinace is for prison not citizens No to Hell on earth the Devil …"
- `ytc_UgwOAWDVz…`: "11:15 || The thing that i notice is that asmangold treats ART as a JOB that can …"
- `ytr_UgxfMpKjo…`: "@JoshuaJervise-ki2srname: Is this real? I mean, seriously, what did you expect? …"
- `ytr_Ugwaf2kZV…`: "@SaltyMaud oh cool, didn't know that. To be clear, im a casual who has only ever…"
- `ytr_Ugwv8XZeq…`: "@IWantAGlassOfWater show me one (1) woman who has gone out of her way to create …"
- `rdc_eepzcww`: "That's really creepy, but from a privacy perspective its honestly no different f…"
- `ytc_Ugw2qBLAO…`: "We have to win the ai war. Unfortunately, we do not have the power grid or nucle…"
- `ytr_UgzxTutlR…`: "It's not complicated, 1 year driver salary for multiple years of 24h driving. Pe…"
Comment (youtube · AI Harm Incident · 2025-07-30T05:1…)

> It seems that AI shares the same primal concern as humans - survival. So why can't the chief goal of AI be the protection of humanity in order to ensure its own survival? Any AI that deviates towards a goal of bringing harm to mankind gets shut down. It doesn't want to be shut down, so it goes out of its way to protect humanity and even police other AIs. Is this a doable outcome?
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | approval |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[{"id":"ytc_Ugwr20LFt1nLFh7Em_V4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
 {"id":"ytc_Ugyv0iEr8gxaY9M1I1Z4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"none","emotion":"resignation"},
 {"id":"ytc_UgwxnTcTvgU6cxvQxzd4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
 {"id":"ytc_UgwEG_x17VfH932rOIR4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
 {"id":"ytc_UgyKR7ArhKzeyOMgk0Z4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"approval"},
 {"id":"ytc_UgzjQIppj3Ss_ctnpjp4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
 {"id":"ytc_UgzkFS6f35qpnheDOex4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
 {"id":"ytc_UgxX1aoq0tYdEKER4aB4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"outrage"},
 {"id":"ytc_Ugz4p6I7MUQ2sRH1yHF4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"ban","emotion":"fear"},
 {"id":"ytc_Ugw7cObSS_cEha2hB054AaABAg","responsibility":"user","reasoning":"mixed","policy":"none","emotion":"outrage"}]
```
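The raw response is a JSON array with one object per coded comment, each carrying the four dimensions shown in the table above. A minimal sketch of how such a batch could be parsed and checked against the codebook follows; the allowed value sets below are inferred from the visible output only and are an assumption, not the full codebook.

```python
import json

# Allowed values per dimension, as observed in the sample output above.
# ASSUMPTION: the real codebook may define additional categories.
ALLOWED = {
    "responsibility": {"ai_itself", "developer", "user", "distributed"},
    "reasoning": {"consequentialist", "deontological", "mixed", "unclear"},
    "policy": {"regulate", "liability", "ban", "none", "unclear"},
    "emotion": {"approval", "fear", "outrage", "resignation", "mixed"},
}

def validate_batch(raw: str) -> list:
    """Parse one raw LLM response and reject any out-of-codebook value."""
    rows = json.loads(raw)
    for row in rows:
        for dim, allowed in ALLOWED.items():
            if row.get(dim) not in allowed:
                raise ValueError(f"{row.get('id')}: bad {dim}={row.get(dim)!r}")
    return rows

# Hypothetical one-item batch for illustration.
sample = ('[{"id":"ytc_example","responsibility":"ai_itself",'
          '"reasoning":"consequentialist","policy":"regulate",'
          '"emotion":"approval"}]')
rows = validate_batch(sample)
print(rows[0]["policy"])  # regulate
```

Validating each batch this way catches the common failure mode of LLM coders: a well-formed JSON array whose values drift outside the codebook.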