Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
Ilya may BELIEVE he has developed a method to keep AI safe, but it’s proprietary, and so can’t be reviewed, so we have to assume Ilya is wrong, as he likely is. There is no way an inferior intelligence can keep a superior intelligence at bay permanently. Eventually the smartest entity will prevail.
youtube
AI Governance
2025-06-20T14:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | liability |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
  {"id":"ytc_UgxnMcF3tZxVnVukWJV4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgxL1HGcuhK1mRwfYnV4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgwuxNWjK_m_ju-QWzt4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgxH7rDZxJaUKal-plh4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgzAo5ZTwqESTOWk1a54AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"ban","emotion":"indifference"},
  {"id":"ytc_UgwU-Fg69PdFa9AS7zx4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_UgxzSQXuoF9zBcI1WgV4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgydE-4e2tdZeRpmdzV4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgyzDQzSwlJXEhvJD_N4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgzBZl4Z926rj-6EIEh4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"}
]
```
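The raw response is a JSON array of coding objects, one per comment, each carrying an `id` plus the four dimensions shown in the Coding Result table (`responsibility`, `reasoning`, `policy`, `emotion`). A minimal sketch of how such a batch could be parsed and indexed by comment ID — the two sample entries are copied verbatim from the log above, and the required-field set is inferred from the observed schema, not from any published spec:

```python
import json

# Two entries copied verbatim from the raw batch response above
# (truncated to two items for brevity).
raw_response = '''
[
  {"id": "ytc_UgwU-Fg69PdFa9AS7zx4AaABAg", "responsibility": "developer",
   "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"},
  {"id": "ytc_UgxnMcF3tZxVnVukWJV4AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "none", "emotion": "fear"}
]
'''

# Field set inferred from the observed responses; an assumption,
# not a documented schema.
REQUIRED_FIELDS = {"id", "responsibility", "reasoning", "policy", "emotion"}

def index_codings(raw: str) -> dict:
    """Parse the model's JSON array and index codings by comment ID.

    Raises ValueError if an entry is missing any coding dimension,
    so malformed model output fails loudly instead of silently.
    """
    codings = {}
    for entry in json.loads(raw):
        missing = REQUIRED_FIELDS - entry.keys()
        if missing:
            raise ValueError(f"entry {entry.get('id')} missing fields: {missing}")
        codings[entry["id"]] = {k: v for k, v in entry.items() if k != "id"}
    return codings

codings = index_codings(raw_response)
print(codings["ytc_UgwU-Fg69PdFa9AS7zx4AaABAg"]["policy"])  # liability
```

Indexing by ID is what lets the inspector above map a stored comment (and its "Coded at" timestamp) back to the exact object the model emitted for it.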