# Raw LLM Responses

Inspect the exact model output for any coded comment, or look a comment up directly by its ID.
## Random samples

- "Great video. Maybe it will take an AI disaster to wake us up—just like it did wi…" (ytc_Ugx1Ii5Y3…)
- "Who needs this ? What is the purpose ? Who benefits? All we need is love, kindn…" (ytc_UgzVxExSL…)
- ""Hi Arindam, we are sorry to say that you got the wrong answer but in any case, …" (ytr_UgzGkh9TK…)
- "They made ai and said oh it will do the jobs we dont want to do... Ai companies…" (ytc_UgyEsyZuQ…)
- "It this the AI that Sam Altman says will one day come up with a cure for cancer?…" (ytc_UgwrhEoZG…)
- "AI will only get better in the future and even the professionals won't be able t…" (ytc_UgyRxxdgB…)
- "If Ai replaces most workers than the worlds population will rapidly go down . D…" (ytc_UgxNB1F6I…)
- "Steven, seems like you have a the ability to lead the way in bringing attention …" (ytc_UgyT5zXWK…)
## Comment
> AGI is inherently unethical to seek as an end goal. however, i assert true AGI is not poosible without scaling quantum compute.
>
> People need to understand, and THIS CHANNEL needs to understand. That Rogue AI's are not the problem. All the big AI models will die within a few hours without human maintenance and intervention. These are essentially yokes, tools of enslavement. I'm not going to expand on that unless someone asks.
>
> The ACTUAL problem is what is happening, and has already been in motion for approaching 15 years now. The wealth class leveraging these tools against the working class. You touch on it in your videos, but it is straight up THE core problem. it's not a single sliver of issue to worry about, it is the core defining problem of this era, and it started when the letters started building data centers in the mid 2000s. It has been snowballing from there. Military AI selection of targets isn't going to improve target selection, it's going to give a layer of plausible deniability to triggermen, and the entire command chain. And make no mistake, the military works for the capital class, plain and simple.
>
> I understand you guys probably don't want to poke the bear, but you are part of the problem if you dont. It's called complicity, and refusal to talk about it as the DEFINING issue, is functionally equal to actively supporting it.
youtube · AI Governance · 2026-03-19T18:4…
## Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | regulate |
| Emotion | outrage |
| Coded at | 2026-04-27T06:24:53.388235 |
## Raw LLM Response
```json
[
{"id":"ytc_UgypgnRlQ3zv2yi9HKR4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgwaaAvzPHvrdEmEBNJ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzGP2gGW0i9kEKArD94AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugzc6yo2pfRvjSol-Rx4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_Ugyvz8rzK3Deft7eCeR4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugwp0q6aQb2mZfU0YrB4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgzEtQEJqq6EvJKxyi54AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgxfQDJMgLD5BDwezvh4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"},
{"id":"ytc_UgyjccWBi6XK7oGoVSd4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgxUmpJdaCzoUOtK2XF4AaABAg","responsibility":"company","reasoning":"virtue","policy":"none","emotion":"outrage"}
]
```
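The raw response is a JSON array with one object per coded comment, so recovering the Coding Result for a single comment ID is a parse-and-index step. Below is a minimal sketch of that lookup in Python; the `ALLOWED` vocabularies are inferred only from the values observed in this one batch (the project's full codebook may define more), and `parse_batch` is a hypothetical helper, not part of the tool shown above.

```python
import json

# Per-dimension vocabularies observed in this batch.
# Assumption: the real codebook may allow additional values.
ALLOWED = {
    "responsibility": {"none", "developer", "user", "company", "ai_itself"},
    "reasoning": {"unclear", "deontological", "consequentialist", "virtue"},
    "policy": {"unclear", "none", "regulate", "liability", "ban"},
    "emotion": {"mixed", "indifference", "outrage", "fear", "resignation"},
}

def parse_batch(raw: str) -> dict:
    """Parse one raw LLM response (a JSON array of coded comments)
    into a lookup table keyed by comment ID, validating each record."""
    table = {}
    for rec in json.loads(raw):
        cid = rec["id"]
        for dim, vocab in ALLOWED.items():
            if rec.get(dim) not in vocab:
                raise ValueError(f"{cid}: unexpected {dim} value {rec.get(dim)!r}")
        table[cid] = {dim: rec[dim] for dim in ALLOWED}
    return table

# Two records copied from the response above.
raw = '''[
 {"id":"ytc_UgzGP2gGW0i9kEKArD94AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
 {"id":"ytc_Ugzc6yo2pfRvjSol-Rx4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"fear"}
]'''

table = parse_batch(raw)
print(table["ytc_UgzGP2gGW0i9kEKArD94AaABAg"]["policy"])  # regulate
```

Validating against a closed vocabulary at parse time is what lets a malformed or off-codebook model output fail loudly here instead of silently corrupting the coded dataset.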