Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples

- I just stopped by a Wendy’s - I don’t know why they picked unskippable commercia… (ytc_UgwZBuey2…)
- @doggo5263 and do you really believe we want to watch AI animation over people w… (ytr_Ugwfi8aRC…)
- Turns out this whole video is ai! All jokes aside w burnE please continue to edu… (ytc_UgxyJY4z-…)
- If people still need to work using AI, there is no 20 or 30 hour work week, ther… (ytc_Ugz0iLGAR…)
- "in a true emergency it’s a human, thousands of miles away, who is expected to … (ytc_UgxFWlux5…)
- Then we have people capitalizing off spending their time making AI art specifica… (ytc_UgyOldib5…)
- The whole AI existential threat propaganda is nothing but a lie, no expert, no p… (ytc_UgwPcMwna…)
- something people forget about yang also what's to change capitalism. human capit… (ytc_UgxfkcvLm…)
Comment
Talking about AI eradicating humanity is the kind of fearmongering that totally undercuts the credibility of these discussions... even with extremely smart experts.
That's just a far-fetched hypothetical, while the reality is that the sociopaths running this new industrial revolution are going to decide what happens with society in the coming 10-15 years - which should be scary enough. We do not need AGI (or a "superintelligence") to eradicate us - megacorporations running rampant can do it perfectly well with mediocre specialized AIs.
youtube · AI Governance · 2025-07-08T14:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | consequentialist |
| Policy | liability |
| Emotion | outrage |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
{"id":"ytc_UgytTPC2pQULLZK9SaN4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"outrage"},
{"id":"ytc_UgxEFklYoBQzi3UqmAZ4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgwtApoo-gRkiYxXhrx4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"resignation"},
{"id":"ytc_UgwOhSlQ1P65oU1nJ0V4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_UgxtVsRGB0Z0J70flLJ4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"fear"},
{"id":"ytc_UgyQU945vtEJH6Y7goF4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
{"id":"ytc_Ugxl_yO-DX0ufjLUL3x4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugy1O44GgOMKixh9y0l4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgxRvb5t07ChFGdBQ1t4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"mixed"},
{"id":"ytc_UgzAW0kF3poJJXZjRj14AaABAg","responsibility":"distributed","reasoning":"unclear","policy":"unclear","emotion":"mixed"}
]
```
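The raw response is a JSON array of per-comment codes keyed by `id`. A minimal sketch of how such a response could be parsed and sanity-checked before it lands in a coding table; the allowed value sets here are inferred only from the responses shown on this page, not from the full codebook, and `parse_coding_response` is a hypothetical helper name:

```python
import json

# Allowed codes per dimension -- an assumption inferred from the
# responses visible above; the real codebook may define more values.
ALLOWED = {
    "responsibility": {"company", "developer", "ai_itself", "distributed", "none"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed", "unclear"},
    "policy": {"regulate", "ban", "liability", "none", "unclear"},
    "emotion": {"outrage", "fear", "resignation", "approval", "indifference", "mixed"},
}

def parse_coding_response(raw: str) -> dict:
    """Parse a raw LLM coding response and index the rows by comment ID,
    rejecting any row that carries an unknown dimension or value."""
    coded = {}
    for row in json.loads(raw):
        comment_id = row.pop("id")
        for dim, value in row.items():
            if value not in ALLOWED.get(dim, set()):
                raise ValueError(f"{comment_id}: unexpected {dim}={value!r}")
        coded[comment_id] = row
    return coded
```

Indexing by ID rather than list position means a lookup like the "Look up by comment ID" view above stays correct even if the model returns rows in a different order than the comments were sent.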