Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- "Plan of Counter-attack: Some people are apparently forgetting that gay men exist…" (ytc_UgyIeTC9q…)
- "I am shocked how dumb this explanation sounds. He basically said nothing and jus…" (ytc_UgwN6bQ1H…)
- "Wow, I'm early lol Not surprised deepfake stuff with AI is getting out of contro…" (ytc_Ugwa9wq2M…)
- "*After being distracted by Sayaka Miki music* Passion is a big part of it too. …" (ytc_UgyfxJq_2…)
- "It was Funny when a Robot attacked a Scooter One Time on another Video, Competit…" (ytc_Ugxd0aEt9…)
- "I will watch the rest, but I do think the first 45 seconds of this video does pr…" (ytc_Ugyn8OoCn…)
- "For AI to 'get rid of people' would be similar to a man becoming dictator by kil…" (ytc_UgxRcEmO2…)
- "Technology is there to free us for more worthwhile pursuits than needless labour…" (ytc_UgzIzif6K…)
Comment
An AI model that creates AI models would be terrifying. Each grouped AI model could be a node attacking specific infrastructure while others are attacking banking information, for example, and because both are under attack simultaneously it would be difficult to track down the origin. Especially if there are honeypot AI models designed to thwart lookup origin and generate new AI models at-will.
youtube · AI Governance · 2025-08-26T03:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:59.937377 |
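A coding result like the one in the table above can be modeled as a small record type. This is a hypothetical sketch — the field names mirror the table's dimensions, and are an assumption rather than a confirmed schema:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class CodingResult:
    # Fields mirror the dimensions shown in the table (assumed, not a confirmed schema).
    responsibility: str  # e.g. "ai_itself"
    reasoning: str       # e.g. "consequentialist"
    policy: str          # e.g. "regulate"
    emotion: str         # e.g. "fear"
    coded_at: str        # ISO 8601 timestamp

# The example record from the table above.
result = CodingResult("ai_itself", "consequentialist", "regulate", "fear",
                      "2026-04-27T06:24:59.937377")
print(result.emotion)  # → fear
```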
Raw LLM Response
```json
[
{"id":"ytc_Ugy29htWaxqJDB78gQt4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgwGYLH5aYrwyIkskcF4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"resignation"},
{"id":"ytc_UgweOTT0F05j-9FAtnJ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgxNGyO4loNUPQeZflt4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_Ugwhc49xB8f29Y0bMoV4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"approval"},
{"id":"ytc_UgyRBOdvVpDVfOrCjSh4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgxZQ5z0fSTwahdr1sl4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_Ugy8i3mSArc_JP5FQn54AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgyyyuyTzLAkpe6heD14AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgxGT3_jPQeL0SR9SV14AaABAg","responsibility":"user","reasoning":"virtue","policy":"industry_self","emotion":"approval"}
]
```
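The raw response is a JSON array of per-comment codes. A minimal sketch of how such a response could be parsed and validated follows; the allowed value sets are assumptions inferred only from the values visible in this sample, not a confirmed codebook:

```python
import json

# Assumed allowed values per dimension, inferred from this sample only.
ALLOWED = {
    "responsibility": {"ai_itself", "developer", "user", "distributed", "none"},
    "reasoning": {"consequentialist", "deontological", "virtue"},
    "policy": {"regulate", "ban", "liability", "industry_self", "none"},
    "emotion": {"fear", "outrage", "resignation", "approval"},
}

def validate_codes(raw: str) -> list[dict]:
    """Parse an LLM coding response and keep only rows with known values."""
    rows = json.loads(raw)
    valid = []
    for row in rows:
        if not row.get("id", "").startswith("ytc_"):
            continue  # skip rows without a recognizable comment ID
        if all(row.get(dim) in vals for dim, vals in ALLOWED.items()):
            valid.append(row)
    return valid

# A one-row example in the same shape as the response above ("ytc_x" is a placeholder ID).
sample = ('[{"id":"ytc_x","responsibility":"ai_itself","reasoning":"consequentialist",'
          '"policy":"regulate","emotion":"fear"}]')
print(len(validate_codes(sample)))  # → 1
```

Validating against a fixed value set catches the common failure mode where the model invents an off-codebook label mid-batch.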