# Raw LLM Responses
Inspect the exact model output for any coded comment, or look a comment up directly by its comment ID.
Random samples:

- "Most beautiful girls/women only marry wealthy/capable Men! Future wemen ideal hu…" (ytc_UgxTUeE2P…)
- "Million year investment strategies? Living in a simulation? In the silicon world…" (ytc_UgxvQg2g_…)
- "If you invented AI, why are you bitching about it now. You should be made respon…" (ytc_UgzYfJBat…)
- "I usually see people ask AI for help when making their stuff, like general art t…" (ytc_UgwQpKnF-…)
- "Y'know, ever since the ancient era humans have dreamt of automating labour to fi…" (ytc_Ugwn12jen…)
- "I’ve been doing AI before many of you f—s could wipe your a-ss, Midjourney art c…" (ytc_UgyCw-Rku…)
- "I had an email from goo about AI enhancement around 5 years ago asking me if I w…" (ytc_UgyYdtcio…)
- "AI "artist" are forgetting the reason that ai makes "art" is because it stole re…" (ytc_UgxYAQRT1…)
## Comment
Humans are dangerous, and we regulate them. We can really skip the ASI killing everyone part and just look at this from a power dynamic: How do we keep the superhero from doing supervillain things? And please stop using the "bad actor" stuff, because you're not talking about ASI (just the kind of tools we already have today). edit: Dean makes a valid (and I would say, very likely) point about big tech running to big gov to help them remain at the top of the tech tree when someone outside their influence builds the human-level AI first (probably by saying the economy will collapse if daddy gov doesn't steal the tech for them to "properly manage"). edit: ugh... instead of regulation no one knows how to draft, why not focus on the yadda yadda yadda part where something is created, and days later Earth is a ghost town. Nukes and gain of function research are at the top of the most likely to kill lots of humans list, so show me how software running on my computer is going to rocket past those risks.
Source: youtube, 2025-11-21T06:0…
## Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | deontological |
| Policy | liability |
| Emotion | indifference |
| Coded at | 2026-04-27T06:26:44.938723 |
## Raw LLM Response
```json
[
  {"id":"ytc_UgwT7kNtEnbroo-TmBN4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
  {"id":"ytc_UgyxY8jTl7gVshqg3hl4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"industry_self","emotion":"indifference"},
  {"id":"ytc_UgwZtlYjAycs5EqT-l94AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgxDyyFdGwGKiYxLHth4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_UgyLpLxuZqp_nffct6J4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugx7TKkEn1s5CnUk4D94AaABAg","responsibility":"user","reasoning":"deontological","policy":"liability","emotion":"indifference"},
  {"id":"ytc_UgwEi5TIp-1mMNTfe4l4AaABAg","responsibility":"none","reasoning":"mixed","policy":"regulate","emotion":"indifference"},
  {"id":"ytc_UgxsN7wp7gCzc5-vked4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgwV5WUhEIWthwExu8B4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgzRnrgd51O3nE2NBGx4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]
```
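The raw response is a JSON array with one object per coded comment, each carrying the four coding dimensions (responsibility, reasoning, policy, emotion). A minimal sketch of the "look up by comment ID" step, assuming the response shape shown above; the `lookup_by_id` helper and the abbreviated two-entry payload are illustrative, not part of the tool:

```python
import json

# Abbreviated raw model output: a JSON array of per-comment code objects,
# copied from two entries of the response above.
raw_response = """
[
  {"id":"ytc_Ugx7TKkEn1s5CnUk4D94AaABAg","responsibility":"user","reasoning":"deontological","policy":"liability","emotion":"indifference"},
  {"id":"ytc_UgwT7kNtEnbroo-TmBN4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"regulate","emotion":"approval"}
]
"""

def lookup_by_id(response_text: str, comment_id: str):
    """Parse the model's JSON array and return the code object for one comment ID."""
    rows = json.loads(response_text)
    # Return the first row whose "id" matches, or None if the ID was not coded.
    return next((row for row in rows if row["id"] == comment_id), None)

codes = lookup_by_id(raw_response, "ytc_Ugx7TKkEn1s5CnUk4D94AaABAg")
print(codes["policy"])  # → liability
```

The returned object maps directly onto the Coding Result table: the matching row's `responsibility`/`reasoning`/`policy`/`emotion` values are what the table displays for the inspected comment.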