Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples:

- "The presentation is good but there are some caveats to be aware of. These caveat…" (ytc_UgxYq3nI1…)
- "Steve has lost it. Parts of his rants sound like he is following the suggestion…" (ytc_UgxVsA7ra…)
- "30:32 pull the plug lol, never watched Terminator .. there's you AI safety right…" (ytc_UgzfQccU-…)
- "Its an AI enhancer, which means that it's smoothing them out, and it accidently …" (ytc_UgzDlz-_s…)
- "I feel that I can’t write essays without chatgpt already, but my writing ability…" (ytc_UgwCysd0O…)
- "Imagine you train ai by using all of the internet. The amount of trolls, racists…" (ytc_UgyEN-ylC…)
- "Robot 1: oh no The box fell / Robot 2: BRO WHY YOU DO THA AAAAAAAAAAAAH…" (ytc_UgzOiAQqJ…)
- "I think the title of this vid does not really do justice to the okay / fine resu…" (ytc_Ugz7lqq3n…)
Comment
Bascially even under the best case scenario AI will be devastatingly destructive to mankind. Under the worst case scenario this whole path is suicidal. And yet we are still driving as fast as possible down the road. Even if AI is used in the best possible ways it will essentially make 99% of human labor obsolete and essentially kill the soul of humanity by making everything we could possibly do futile. And that's the best case scenario...
youtube · AI Governance · 2025-07-11T22:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | consequentialist |
| Policy | ban |
| Emotion | outrage |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
{"id":"ytc_UgxPWK42D-YZMZdbGoR4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_Ugw79vFlt1UM1mYta0l4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_Ugza8aX8jlelUf_Scol4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_Ugy-8mq0d26mLcggU8F4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugw73f1ci1ZSb3Lixeh4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugxs8g3dObYvlIk5f_d4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgxzMPNZAwi74E6gI6Z4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
{"id":"ytc_UgypgMehhX7uMknIPdh4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxmBzObGYWFxKxpf6R4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgxwlKoqTyFC4rXxHeJ4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]
```
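The raw response is a JSON array with one object per comment: the comment ID plus the four coding dimensions shown in the Coding Result table. A minimal Python sketch of indexing such a response by comment ID (the two records in the string literal are excerpted from the array above; the variable names are illustrative, not part of the dashboard's code):

```python
import json

# Excerpt of a raw LLM response: a JSON array of per-comment codings.
raw_response = '''
[
  {"id": "ytc_UgxmBzObGYWFxKxpf6R4AaABAg", "responsibility": "distributed",
   "reasoning": "consequentialist", "policy": "ban", "emotion": "outrage"},
  {"id": "ytc_UgxwlKoqTyFC4rXxHeJ4AaABAg", "responsibility": "developer",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"}
]
'''

# Index the codings by comment ID for constant-time lookup,
# mirroring the page's "look up by comment ID" behavior.
codings = {entry["id"]: entry for entry in json.loads(raw_response)}

coding = codings["ytc_UgxmBzObGYWFxKxpf6R4AaABAg"]
print(coding["policy"], coding["emotion"])  # -> ban outrage
```

Because the model returns the comment ID alongside each coding, the dashboard can join a coding back to its source comment without relying on array order.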