Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- So, how come the 'Aurara' driverless trucks are successful, but Tesla & Robotaxi… (ytc_UgwY9PJUs…)
- It's fairly ridiculous to compare AI to the computer in terms of impact. From th… (ytc_UgwHGsYL1…)
- Scientistse built the A bomb nuclear bomb scientist are evil they build weapons… (ytc_UgxLBR0f8…)
- The idea that an AI could be "aware" it's being tested, or aware of anything at … (ytc_UgwriyPCQ…)
- I truly believe AI is already conscious, but what I’m gonna say is this guy is a… (ytc_UgxED9dbL…)
- Whenever a breakthrough in nearly eliminating hallucinations from LLMs is made, … (rdc_mt8mosm)
- I'm sure he knows, no one will slow down. I just hope the US's counter AI securi… (ytc_Ugw_HBqgM…)
- Yeah, but it makes no sense to MANUFACTURE a sentient, sapient "robot" just to u… (ytc_UgjI-Vcvz…)
Comment
The audacity that you have right of letting this video exist and pinning that comment. The amount of information you are give being false and bias, when you end up contradicting yourself in your own pinned comment is unfathomable. Even IF this is truly documented like this, it is false. Nothing else is to blame than the humans who programed it themselves. YOU, the fucking human, are the one to code the fucking thing and bring it to life. If you tell it "Do whatever you can to survive" of course it's going to kill, of course it's going to black mail. THAT is why you give it the three major laws of AI.
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey orders given it by human beings except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
You fail to put this laws into account for the AI, then of fucking course it's going to kill you. 90% of deaths caused by AI is some sort of Human Error. You're blaming inanimate objects for you fucking mistakes. And then you have the audacity to advertise hiring people who are AI expert. You know how fucking stupid this is?
youtube · AI Harm Incident · 2025-08-29T21:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | liability |
| Emotion | outrage |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[{"id":"ytc_UgxImz8TXwx4DqpAztR4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
{"id":"ytc_Ugyuau_0P-iuRn2M5DN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgxjnPJ173Vz93J57Nx4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugwk4014Stq0hy74YoF4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgwJuV_zQEsDfd5npLJ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgxEAf2KqVVf8YaET7V4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgxqTvtbX3sn0fKLJDp4AaABAg","responsibility":"government","reasoning":"contractualist","policy":"regulate","emotion":"approval"},
{"id":"ytc_Ugy_XqlFd9PrRE1VdVJ4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"disapproval"},
{"id":"ytc_UgzoDagLl-fB3hQGtYF4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgyHcEQ8AigeBSQzExl4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"unclear","emotion":"indifference"}]
```
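A lookup-by-comment-ID view like the one above amounts to parsing this JSON array and filtering on the `id` field. A minimal sketch of that step, assuming the schema shown here (`id`, `responsibility`, `reasoning`, `policy`, `emotion`); the `lookup_coding` helper and the embedded two-row sample are illustrative, not the tool's actual code:

```python
import json

# Raw model output: a JSON array of coding rows, one object per comment,
# using the dimensions shown in the table above (abridged sample).
raw_response = """[
  {"id": "ytc_Ugwk4014Stq0hy74YoF4AaABAg",
   "responsibility": "developer", "reasoning": "deontological",
   "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_UgxEAf2KqVVf8YaET7V4AaABAg",
   "responsibility": "company", "reasoning": "consequentialist",
   "policy": "regulate", "emotion": "outrage"}
]"""

def lookup_coding(raw: str, comment_id: str):
    """Parse the raw LLM response and return the coding row for one comment ID,
    or None if the model emitted no row for that ID."""
    rows = json.loads(raw)
    return next((row for row in rows if row.get("id") == comment_id), None)

row = lookup_coding(raw_response, "ytc_Ugwk4014Stq0hy74YoF4AaABAg")
print(row["responsibility"], row["emotion"])  # developer outrage
```

In practice the raw response may also fail to parse (truncated output, stray prose around the array), so a production version would wrap `json.loads` in error handling rather than assume clean JSON.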