## Raw LLM Responses

Inspect the exact model output for any coded comment: look up a comment by its ID, or browse the random samples below.
- `ytc_UgxuSxXFl…`: Not even a year later and the AI bubble is popping. 15:20 Eat sh*t Ai bros…
- `ytc_Ugyc0gUgA…`: i asked an ai to not kill me when they take over and it responded with "LMAOOOO …
- `ytc_UgzueaPRY…`: now to confirm all this everyone go watch the latest joe rogan podcast talking a…
- `ytc_UgzJ4bmB2…`: Jokes on everybody, this robot is a red herring to distract you from the fact th…
- `ytc_Ugzm2-tVF…`: This is why it is Supervised, and not autonomous. You are supposed to pay attent…
- `ytc_UgxOYQgpU…`: Ask an AI when PhD economists should have figured out Planned Obsolescence in au…
- `ytc_UgwFkn0zi…`: As someone who's studying game design and who's professor basically forced me to…
- `ytc_UgwCHhZjt…`: Wrong face recognition is flawed, it has nothing to do with race. As a trained o…
## Comment

> What incentive does AI have to take over the world? As long as no one builds a strong incentive system, there is no threat. As long as we only model the intelligence, and not a whole human being with feelings and instincts, I do not see the danger of AI doing its own thing. If someone does, I would still consider that human misuse and not the AI going wild. It's just a consequence of not designing the system well.

Source: youtube · Topic: AI Governance · 2025-07-15T17:0… · ♥ 1
## Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | industry_self |
| Emotion | approval |
| Coded at | 2026-04-27T06:24:59.937377 |
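The four dimensions above can be sanity-checked in code. A minimal validation sketch follows; the allowed values per dimension are an assumption inferred from the labels observed in this page's sample output, not from a documented codebook.

```python
# Allowed values per coding dimension, ASSUMED from the labels observed
# in this page's sample output (not an authoritative codebook).
OBSERVED_VALUES = {
    "responsibility": {"developer", "company", "user", "ai_itself", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed", "unclear"},
    "policy": {"industry_self", "liability", "ban", "none", "unclear"},
    "emotion": {"approval", "fear", "outrage", "resignation", "mixed", "indifference"},
}

def validate_record(record):
    """Return a list of problems; an empty list means the record looks valid."""
    problems = []
    for dimension, allowed in OBSERVED_VALUES.items():
        value = record.get(dimension)
        if value is None:
            problems.append(f"missing dimension: {dimension}")
        elif value not in allowed:
            problems.append(f"unexpected {dimension} value: {value!r}")
    return problems

# The coding result shown in the table above passes validation.
record = {"responsibility": "developer", "reasoning": "consequentialist",
          "policy": "industry_self", "emotion": "approval"}
print(validate_record(record))  # []
```

A check like this catches malformed records before they are written back into the coding table.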
## Raw LLM Response

```json
[
{"id":"ytc_UgwB76rPksp_uOBybfx4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgzxOaORPAr_VTsoss14AaABAg","responsibility":"user","reasoning":"virtue","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgyiqP_RpXu5hV4n3XB4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"resignation"},
{"id":"ytc_UgzKzIa7jSAKG6Cnq8R4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgxMEY9iX-t3xUEagfB4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"},
{"id":"ytc_UgzbBH0eRWhhkbQScSJ4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"indifference"},
{"id":"ytc_Ugw5a62sLYR5Y24LW5V4AaABAg","responsibility":"company","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_Ugx0HMpxU1deKcSeG3d4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"outrage"},
{"id":"ytc_Ugxp6VDXFm2o3OcX4vF4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgxJSe0g8ngq7u7LbK94AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"indifference"}
]
```
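The "look up by comment ID" step above amounts to indexing the raw JSON array by each record's `id`. A minimal sketch, assuming the raw response is a JSON array of coding records like the one shown (the two records below are copied from that sample):

```python
import json

# Raw model output, assumed to be a JSON array of coding records;
# these two records are taken from the sample response above.
raw_response = """
[
  {"id": "ytc_UgxMEY9iX-t3xUEagfB4AaABAg",
   "responsibility": "developer", "reasoning": "consequentialist",
   "policy": "industry_self", "emotion": "approval"},
  {"id": "ytc_Ugw5a62sLYR5Y24LW5V4AaABAg",
   "responsibility": "company", "reasoning": "deontological",
   "policy": "ban", "emotion": "outrage"}
]
"""

def index_by_comment_id(response_text):
    """Parse the raw response and index each coding record by its comment ID."""
    records = json.loads(response_text)
    return {record["id"]: record for record in records}

codes = index_by_comment_id(raw_response)
print(codes["ytc_UgxMEY9iX-t3xUEagfB4AaABAg"]["policy"])  # industry_self
```

With the records indexed this way, the per-comment "Coding Result" table is a single dictionary lookup away from the raw response.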