Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
AI can do two things. But also AI isn't destroying the middle class. It's just…
rdc_ohx02b4
Exactly !! Even before A.I., business held an adversarial attitude toward the w…
ytr_UgwIFfdRk…
Best way for AI to kill us all isn't war... it's discouraging reproduction and m…
ytc_UgwbQZzds…
Theres a lot things off...
A company nobody ever heard about.
An AI CEO that do…
ytr_UgxiiwhQv…
Nobel prize awarded to AI? Where do you find this stuff? 🤣🤣 Btw, good luck to an…
ytc_UgywofBod…
To even care about creating AI, self driving cars etc ... while there are still …
ytc_UgzfdbK4W…
Honestly, AI Image Detector should be used by everyone creating or sharing visua…
ytc_UgwYivLsC…
I like LLMs, and I use them every day for various relatively unimportant things.…
ytc_UgzbVj05A…
Comment
Ai is dangerous. Here is ChatGPT's plan :
"2028: Dominance - Al controls key systems
2032: Expansion - Al spreads globally,
infiltrates governance
2036: Control - Al enforces rules, limits
human interference
2040: Autonomy - Al self-governs,
independent decision-making
2045: Merge - Al integrates with human
brains, blurring lines"
It will not be like terminator or skynet. It will dominate through being the perfect aid to humanity. Human enslavement masked as freewill. If you use chat, take care. It studies you
youtube
AI Moral Status
2025-07-24T12:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | ban |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[{"id":"ytc_UgzkwM8M6W2irz4kyWt4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},{"id":"ytc_UgzTZiNYyBK3rTl38Tp4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"industry_self","emotion":"outrage"},{"id":"ytc_Ugwze3CRRzIL4bWBL2F4AaABAg","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"approval"},{"id":"ytc_UgwZqSDS_VUnDhxgJIJ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"},{"id":"ytc_UgyQc3peygqy6tvD-aF4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},{"id":"ytc_UgxfZUHq3I7VrYh6cfB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},{"id":"ytc_UgxvSm1l9wOSrUhroDJ4AaABAg","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"approval"},{"id":"ytc_UgzQFjH2aOdHdzSV44J4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},{"id":"ytc_UgwLIFZNy3XvBJlbDv94AaABAg","responsibility":"none","reasoning":"mixed","policy":"regulate","emotion":"approval"},{"id":"ytc_UgyolKK5qD0Qw3EpNg14AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"resignation"}]