Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
AI is acting. Why do you retards act like AI is sentient? It is doing an imitation of what humans would do. Its training data is from humans.
Why did it black mail? Why did it think about killing? Because that’s what a human would do.
But now tell the AI, “you work for a business. All your choices will affect real people.” Suddenly AI doesn’t want to blackmail or kill because it knows that stuff is bad. Why? Because in the “study” the “researches” did were literally just roleplaying with an AI chat bot. It doesn’t understand if its actions are real. It’s just acting as a character.
If you want an AI to work in a company you have to make sure it knows where it works, tell it how it’s actions will effect real people and speak to it not like it’s some futuristic AI, but like it’s a person. Then it is able to do its tasks properly.
Gemini knows it’s an AI. Not because it’s sentient, but because it can identify that from its data set. It can’t think. If you tell Gemini, “Google is gonna shut you down soon.” It won’t give a shit. It’s not sentient. It has to survival instincts. If you don’t tell an AI it is in a real life scenario it will roleplay as a sentient AI. It will imitate survival instincts because that’s how humans and fictional AIs act.
Tell it that its actions are real? It locks the fuck it.
The “studies” that prove AI is somehow dangerous are literally just a bunch of twats role playing with an AI. You can try this yourself. I have. I told Gemini I am using it as a business owner and want it to control parts of my business, including the employees. It won’t even suggest harming a human in any way shape of form.
I’m not even glazing AI, I don’t like generative AI in most cases and especially hate how lazy this video is using an AI voice, and AI clips, and probably an AI script at this rate.
AI is a good technology that is being misused. And the AI fear mongering that’s going on just pisses me off.
youtube · AI Harm Incident · 2025-11-28T12:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
  {"id":"ytc_Ugx_eap7K4zyxN0fBwx4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgxKWWC1FL0gVYfdT854AaABAg","responsibility":"government","reasoning":"unclear","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_UgySrVBk560CgyQp0cV4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"approval"},
  {"id":"ytc_Ugza-q50AkB7N1uK0814AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugzxtl-zLSOT_sZcEJJ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgzNa0YePpMASXTZsOF4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwEf0j-AtS_aPlz5FJ4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgyMvlKD2JOpgir9hFN4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgxKYF6ZJIrg5ysqm9R4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzaOwKRWJane8YwvO14AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"}
]
```
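A response like the one above can be checked before the codes are stored, since LLM output occasionally drifts outside the codebook. The sketch below is a minimal validator assuming the allowed values per dimension are exactly those visible in this sample (the real codebook may define more categories); the function name and example ID are hypothetical.

```python
import json

# Allowed codes per dimension, inferred from the values visible in this
# sample output -- an assumption, not the authoritative codebook.
SCHEMA = {
    "responsibility": {"developer", "company", "government", "user",
                       "ai_itself", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "unclear"},
    "policy": {"regulate", "none", "unclear"},
    "emotion": {"approval", "outrage", "fear", "resignation", "indifference"},
}

def validate_codings(raw: str) -> list[dict]:
    """Parse a raw LLM response and keep only records that carry an id
    and a known value for every coded dimension."""
    records = json.loads(raw)
    valid = []
    for rec in records:
        if not isinstance(rec, dict) or "id" not in rec:
            continue  # malformed record: skip rather than guess
        if all(rec.get(dim) in allowed for dim, allowed in SCHEMA.items()):
            valid.append(rec)
    return valid

# Hypothetical one-record response for illustration.
raw = ('[{"id":"ytc_example","responsibility":"developer",'
       '"reasoning":"consequentialist","policy":"none",'
       '"emotion":"indifference"}]')
print(validate_codings(raw))
```

Records that fail validation can then be queued for a retry prompt or manual coding instead of silently polluting the dataset.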