Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples (click any ID to inspect):

- `ytc_Ugz00Ecld…`: "yeah that has meant nothing, its Feb 2026 and we have trillions more AI channels…"
- `ytc_Ugx93ot5-…`: "Somehow, this video has had the opposite effect. I now want AI to continue doing…"
- `ytc_Ugz9bVMGp…`: "The world now is just competition between one narcissist and another, and our bi…"
- `ytc_UgxlO6dUe…`: "In hindsight, it was probably a mistake to call technology like ChatGPT an “AI”.…"
- `ytr_UgzNBG3mO…`: "His entire premise is literally factually wrong. Ai agents can run 24/7 365 prom…"
- `ytc_UggYWv6ER…`: "This is inevitable. One day everyone would be able to build their own terminator…"
- `ytc_UgxyfUPp3…`: "They shouldn't let ai be usable as evidence, it should be a tool for detection m…"
- `ytr_UgwV6Y3lc…`: "@霧裡探花水中望月 That's why I asked, do u think AI is a better option than a human bei…"
Comment

> Among the greatest threats from AI is the one few of us recognize. The threat that so-called "good" AI will help humans advance technology to a point where life becomes so easy, stupid, long and pointless that literally everything loses all it's value. Really wrap your head around that. There is no winning with AI.

youtube · AI Governance · 2023-04-18T04:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | consequentialist |
| Policy | liability |
| Emotion | resignation |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_UgyRktXepxWcbhUYrQ54AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"outrage"},
{"id":"ytc_UgyQTz9fLXAPUpjYu7Z4AaABAg","responsibility":"government","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgxfkOn5G8MfPau3gZ54AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"mixed"},
{"id":"ytc_UgyAsiGHE6asYD_YEoV4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugz4MellDNC2qolkWSl4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_Ugw9gWVsVaNO4f9BhjZ4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"regulate","emotion":"mixed"},
{"id":"ytc_UgzEqTtZ6zZIivkef694AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_UgyCrbs1vJuUZj83LIh4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgwA5mdu2NzuU9eliUN4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"liability","emotion":"resignation"},
{"id":"ytc_Ugw6w63LK8WneeuPA6Z4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]
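The "look up by comment ID" view above can be sketched in a few lines: parse the model's JSON array and key each coded record by its `id` field. This is a minimal illustration, not the tool's actual implementation; `index_by_id` is a hypothetical helper, and the sample records are taken from the response above.

```python
import json

# Two records copied from the raw LLM response above (truncated sample).
RAW_RESPONSE = """
[
 {"id":"ytc_Ugz4MellDNC2qolkWSl4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
 {"id":"ytc_UgwA5mdu2NzuU9eliUN4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"liability","emotion":"resignation"}
]
"""

def index_by_id(raw: str) -> dict:
    """Parse the model's JSON array and key each coded record by comment ID.

    Hypothetical helper: assumes the response is a valid JSON list of
    objects, each carrying an "id" field as in the response shown above.
    """
    return {record["id"]: record for record in json.loads(raw)}

codes = index_by_id(RAW_RESPONSE)
print(codes["ytc_UgwA5mdu2NzuU9eliUN4AaABAg"]["emotion"])  # → resignation
```

In practice a lookup like this would also want to tolerate malformed model output (e.g. wrap `json.loads` in a `try`/`except` and skip records missing an `id`), since raw LLM responses are not guaranteed to be valid JSON.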