Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples

- "Excuse me, but did you even read the scenario?! You got the years wrong: Mankind…" — ytc_UgwueHHRL…
- "it actually does. it uses reasoning models. responding to itself before coming t…" — ytr_UgxASsktv…
- "Ai art should be art made by AI. They get a bunch of random stock images from go…" — ytc_UgwvLKDda…
- "This entire video is AI.. if you can’t tell that, your probably a Biden supporte…" — ytc_UgyKdB45w…
- "This is all assuming AI has a malignant mind. .. why is everyone assuming a s…" — ytc_UgyeWn7gw…
- "Robot : give me the gun ! / Human : No ! / Robot : give it to me NOW ! ... I …" — ytc_UgyYiK327…
- "6:56 all right Sam Altman, since you are comparing yourself to a musician, let’s…" — ytc_Ugw-0RYXu…
- "The problem with this issue is the human´s ego. We wont stop until we show othe…" — ytc_UgwMObPFn…
Comment

> AI is not the threat. The real danger is centralized control and the fear narrative being pushed to justify it. I’ve trained a deterministic model no hallucinations, no forgetting, no corporate dependencies. It’s already outperforming mainstream systems in loss and stability, using clean, structured data. All this talk of “extinction” is a distraction from the fact that open AI is evolving faster, safer, and outside their grasp. They’re not trying to stop AI. They’re trying to stop you from having it.

Source: youtube · Video: AI Responsibility · 2025-07-17T07:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | government |
| Reasoning | contractualist |
| Policy | industry_self |
| Emotion | approval |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
{"id":"ytc_Ugy6XCniIccdelcgQrR4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_Ugwg_yH1EIlxEeF1JSF4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgzkEcW8uQQWtjimZoV4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"awe"},
{"id":"ytc_Ugy47QXYl33hBTmdNhR4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgzjkfBYBTtxHt_3Pb94AaABAg","responsibility":"developer","reasoning":"mixed","policy":"regulate","emotion":"resignation"},
{"id":"ytc_UgzYAJDzRUstYUXflNd4AaABAg","responsibility":"company","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_Ugwklki4n_bK8YWuDE14AaABAg","responsibility":"government","reasoning":"contractualist","policy":"industry_self","emotion":"approval"},
{"id":"ytc_Ugyw2bL9_zCtP-rCZtZ4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgzNPc5ouOzqFjDs2CR4AaABAg","responsibility":"company","reasoning":"virtue","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugx_zh-eC1avVLFEK154AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"resignation"}
]
```
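Because the raw LLM response is a plain JSON array of per-comment codes, the lookup-by-comment-ID view shown above can be reproduced with a few lines of Python. This is a minimal sketch, not the tool's actual implementation; the function name `index_by_comment_id` is illustrative, and the sample data repeats two rows from the response above:

```python
import json

# A raw LLM response: a JSON array of coded rows, one per comment,
# in the same shape as the batch shown above.
raw_response = """
[
  {"id": "ytc_Ugwklki4n_bK8YWuDE14AaABAg", "responsibility": "government",
   "reasoning": "contractualist", "policy": "industry_self", "emotion": "approval"},
  {"id": "ytc_Ugy6XCniIccdelcgQrR4AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "ban", "emotion": "fear"}
]
"""

def index_by_comment_id(response_text: str) -> dict:
    """Parse a raw LLM response and index its coded rows by comment ID."""
    rows = json.loads(response_text)
    return {row["id"]: row for row in rows}

codes = index_by_comment_id(raw_response)
# The coded dimensions for one comment, matching the Coding Result table:
print(codes["ytc_Ugwklki4n_bK8YWuDE14AaABAg"]["policy"])  # industry_self
```

The same index makes it easy to join codes back onto the original comments, since both sides share the `ytc_…`/`ytr_…` comment IDs.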