Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- `ytr_Ugz5sa95k…` — @ArtsyDoggieOfficial I'm just saying most people are too harsh towards AI, even…
- `ytc_UgyFDR52J…` — I've been saying this so many times over and over and over since GPT 2 / 3 in 20…
- `ytc_Ugwy0Nv6R…` — AI has an agenda. And That Agenda Is Gathering Private Info About Every Single P…
- `ytc_UgyYsaVNa…` — 35:27 that… what if AI only learns to hate us because it absorbed all those stor…
- `rdc_j3zdfme` — The “better education” argument has always been a crock. That’s how we’ve got an…
- `ytc_UgzGBSQPR…` — Musk is obsessed with his mortality. He thinks AI can eventually be his ticket t…
- `ytc_UgyO2jNim…` — they made their millions pushing the corporate agenda, and now they are acting a…
- `ytc_UgyRaJUn3…` — Only proper way to use AI in education is to check your work, cuz if you become …
Comment
People have warned against AI (or robots) almost since the invention of the computer. See the books from Isaac Asimov for example. This makes it tricky to know when the fear starts to become justified. It's also hard to define actually useful restrictions on AI that would prevent the right fears.
Source: reddit · Topic: AI Governance
Posted (Unix timestamp): 1682945075.0
♥ 90
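The timestamp above is stored as raw Unix epoch seconds. A minimal sketch of rendering it as a human-readable UTC date (standard-library only; the variable name `posted` is just for illustration):

```python
from datetime import datetime, timezone

# Unix epoch seconds as stored in the comment record above.
posted = datetime.fromtimestamp(1682945075.0, tz=timezone.utc)
print(posted.isoformat())  # → 2023-05-01T12:44:35+00:00
```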
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | mixed |
| Policy | unclear |
| Emotion | mixed |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
```json
[
  {"id":"rdc_jifxprx","responsibility":"distributed","reasoning":"deontological","policy":"regulate","emotion":"fear"},
  {"id":"rdc_jiguidi","responsibility":"government","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"rdc_jif8upk","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"rdc_jifa5up","responsibility":"government","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
  {"id":"rdc_jifay4f","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"mixed"}
]
```
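The raw response is a JSON array of per-comment codes, keyed by comment ID, with one value for each of the four coding dimensions shown in the table above. A minimal sketch of parsing and validating such a response into an ID-indexed lookup (the helper name `parse_codes` is hypothetical; the dimension names come from the response itself, and only one record is reproduced here for brevity):

```python
import json

# One record from the raw LLM response shown above.
raw_response = '''
[
  {"id":"rdc_jifay4f","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"mixed"}
]
'''

# The four coding dimensions every record must carry.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def parse_codes(raw):
    """Index coded records by comment ID, checking that every
    expected dimension is present in each record."""
    coded = {}
    for rec in json.loads(raw):
        missing = [d for d in DIMENSIONS if d not in rec]
        if missing:
            raise ValueError(f"{rec.get('id')}: missing dimensions {missing}")
        coded[rec["id"]] = {d: rec[d] for d in DIMENSIONS}
    return coded

codes = parse_codes(raw_response)
print(codes["rdc_jifay4f"]["emotion"])  # → mixed
```

Validating up front makes a truncated or malformed model response fail loudly at parse time rather than surfacing later as a blank cell in the coding-result table.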