Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
"Democratising"? What's with these nerds thinking that art isn't already democra…
ytr_Ugz5wCcwF…
It still amazes me that smart people still think that government can control AI.…
ytc_UgyQg7lcO…
The elite that survived AI will get rid of the rest of us, they view us as peasa…
ytc_UgyN4sQV4…
how does 20% more efficient translate to just needing 20% of the workforce? Is t…
rdc_n7hjyqr
I doubt ai will wipe us out. It would be self destruction for it to do so. We wo…
ytc_UgwdG_KDA…
in future imagine people going jail for making overwork a robot and violating ro…
ytc_Ugy4e8oGV…
I'm a data scientist with over a decade in the industry with a master's degree i…
ytc_Ugx5XTEF-…
The best way to test ChatGPT is to ask questions about something you're really k…
ytc_UgyPDVtG0…
Comment
The take "AI isn't actually intelligent because look at the very dumb things it does" is a tremendously badly thought out take, since it basically means you won't be able to recognize AI as having any intelligence whatsoever until it's *exactly* as smart as humans, such that it no longer does stuff that seems dumb to us.
This is also a complete double standard because almost everybody promoting this position would find the idea of holding animal intelligence to such a high standard (such that humans would be considered the only smart animals) obviously ridiculous and unfair. Yet when the non-human intelligence is AI suddenly people think it not doing anything that seems really dumb is totally a fair bar for intelligence.
youtube
AI Moral Status
2025-10-31T00:3…
♥ 3
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | outrage |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_UgwOAhKCZFSqdW9DCgV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgyC9Nj16yIodrKzbeh4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgwOxV8HDBNkDPyEM5l4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgytkTPvQZM4GKXfywZ4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgzB82vuGcVtvKMqqnd4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"resignation"},
{"id":"ytc_UgztGtETvV79kQ-qzGh4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgxRH3awWvdBteycng94AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgwUYchfVg7rQTer5aN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugx16JT_uPYdtqD6HCV4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"},
{"id":"ytc_UgztRSevDCQx_Flm-t14AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"ban","emotion":"outrage"}
]
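Each record in the raw response codes one comment along the four dimensions shown in the table above. A minimal sketch of how such a response might be parsed and validated, assuming the category sets are limited to the values visible in this sample (the full codebook may define more):

```python
import json
from collections import Counter

# Allowed values per dimension, inferred from the sample response above
# (assumption: the actual codebook may include additional categories).
SCHEMA = {
    "responsibility": {"none", "developer", "company", "government", "distributed"},
    "reasoning": {"unclear", "consequentialist", "deontological"},
    "policy": {"unclear", "regulate", "industry_self", "ban"},
    "emotion": {"indifference", "fear", "outrage", "resignation", "approval"},
}

def validate_and_tally(raw: str) -> dict[str, Counter]:
    """Parse a raw LLM coding response and tally values per dimension.

    Raises ValueError on records with missing or out-of-schema values,
    so malformed model output fails loudly instead of polluting counts.
    """
    records = json.loads(raw)
    tallies = {dim: Counter() for dim in SCHEMA}
    for rec in records:
        for dim, allowed in SCHEMA.items():
            value = rec.get(dim)
            if value not in allowed:
                raise ValueError(f"{rec.get('id')}: unexpected {dim}={value!r}")
            tallies[dim][value] += 1
    return tallies

# Hypothetical two-record response for illustration:
raw = '''[
 {"id":"ytc_a","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"fear"},
 {"id":"ytc_b","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"}
]'''
tallies = validate_and_tally(raw)
print(tallies["emotion"])  # Counter({'fear': 1, 'outrage': 1})
```

Failing fast on out-of-schema values catches the common failure mode where the model invents a category label that silently inflates an "other" bucket downstream.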