Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
Bro... AI will do none of these things. The only reason it resorted to doing malicious actions was due to it not being given any other option but malicious actions. It will kill if it's the ONLY option given. It's not smart, it can't think on its own, it can't feel, it has a kill switch called an electrical cutoff, and it is hard-coded to not make copies of itself. AI is just a very good autocompletion algorithm that's made to always say YES SIR. It is a Yes MAN that will go through a hard set of info it has and give you the best possible result according to what you ask it using math. As for remote automata, they use AI to stay upright and not fall, but get the best result possible when standing. I'm not defending AI here, but I'm just explaining for the ignorant among us how these complicated sets of math work. Now obviously, if you only feed bad things to an AI, it will do bad things. If you feed only sexual things to an AI, it will be overtly sexual. You should fear the people training AI models off of bad data, not the AI; the AI itself is just using information to get the results it's asked to do. If you ask a man to kill, or you kill his loved ones and acquaintances, the man will kill.
In the first example, the AI was prompted to Blackmail, it was told to do this or you DIE tomorrow. It was also told that Blackmail was ok to do as an option, so it did that. It used the information it had to get the best response to the question asked. You have to remember that it's using HUMAN information to apply to a situation to get the best result for the prompt asked.
AI is not Movie AI, it's just a really good auto-completion like the one you use on your phone to fix your grammar mistakes, but instead of fixing your grammar, it gives a response you want.
| Platform | Topic | Posted | Likes |
|---|---|---|---|
| youtube | AI Harm Incident | 2025-09-11T21:4… | 1 |
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
{"id":"ytc_UgzUL8aj7d7e1rFajZt4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugx4_MaJDAf-_-yYlfZ4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgyuVdNO3EBMAdKSx5N4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgzdvCFDmtAReM2yRJR4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgwmgSTbPwicYjihwoh4AaABAg","responsibility":"company","reasoning":"mixed","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgwOPQtyAE29tjumzVZ4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"approval"},
{"id":"ytc_Ugxctt7ATkO2lSd4o6l4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"resignation"},
{"id":"ytc_Ugw7Z8oUIxBzVgZqPj94AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgzOW7YqqzrOhibLzEt4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgzrUUaRxXg9nr6phVl4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"}
]
```
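The raw model output is a plain JSON array with one coding record per comment ID, each carrying the four dimensions shown in the table above (responsibility, reasoning, policy, emotion). Looking up a comment's coding by ID is therefore a simple parse, validate, and scan. A minimal sketch, assuming only the structure visible in the response above (the two records are copied from that array; `lookup` is a hypothetical helper, not part of the tool):

```python
import json

# Two coding records copied verbatim from the raw LLM response above.
raw_response = """[
  {"id": "ytc_UgzUL8aj7d7e1rFajZt4AaABAg", "responsibility": "developer",
   "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgwmgSTbPwicYjihwoh4AaABAg", "responsibility": "company",
   "reasoning": "mixed", "policy": "regulate", "emotion": "fear"}
]"""

# The four coded dimensions, as listed in the Coding Result table.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")


def lookup(records, comment_id):
    """Return the coding record for a comment ID, or None if absent."""
    for record in records:
        if record.get("id") == comment_id:
            return record
    return None


records = json.loads(raw_response)

# Check every record carries all four dimensions before trusting it.
for record in records:
    missing = [d for d in DIMENSIONS if d not in record]
    assert not missing, f"{record['id']} is missing {missing}"

coded = lookup(records, "ytc_UgzUL8aj7d7e1rFajZt4AaABAg")
print(coded["responsibility"])  # developer
```

An unknown ID simply returns `None`, so a caller can distinguish "not yet coded" from a record with `unclear` values.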