Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- Me, I was working on small trusted AI before my university fired my professor an… (ytc_UgzL9a2qK…)
- AI could be very benificial for a composition generator, where it generates mann… (ytc_UgxXE0mFV…)
- I'm someone who legally disabled (autism) and I always hate the argument people … (ytc_UgwjRnP4g…)
- Chatgpt cant even figure out how to write a testcase properly for my stupid code… (rdc_lqrtqve)
- “My strengths are anatomy and character” -The guy using AI to help him draw both… (ytc_UgwKzhact…)
- Don’t lie, cheat, or physically harm anyone. Working the hardest to understand, … (ytc_Ugw8GMK-u…)
- Old-style software engineer who majored in AI back in the 80s. In short, no. Co… (ytc_Ugxnzx6dn…)
- There are strong cases to be made for why you might not want to use AI for thera… (ytc_Ugy2BevRL…)
Comment
I work in an AI field and have published a few papers, and I strongly disagree that this is just fear mongering.
I am NOT worried about a Skynet-style takeover, but AI is now being deployed in critical infrastructure, defense, financial sectors, etc., and many of these models have extremely poor explainability and no guardrails to prevent unsafe behaviors or decisions.
If we continue on this path, it's only a matter of time before "AI" causes something really stupid to happen and sows absolute chaos. Maybe it crashes a housing market and sends the world into a recession/depression. Maybe the AI fucks up crop insurance decisions and causes mass food shortages. Maybe a missile defense system mistakes a meteor for an inbound ICBM and causes an unnecessary escalation. There are even external/operational threats, like mass civil unrest when AI takes too many jobs and governments fail to implement social safety nets or some form of UBI. And for many of these we won't even know why it happened, because the decision was made by some billion-node black-box ANN.
I don't know exactly what the chaos and fuck-ups will look like, but I feel pretty confident that without some serious regulation and care, something is going to go very badly. The shitty thing about rare and unfamiliar events is that humans are really bad at accepting they can happen; thinking major AI catastrophes won't ever happen seems a lot like a rare-event fallacy/bias to me.
Source: reddit
Topic: AI Responsibility
Posted: 2024-03-18 (Unix timestamp 1710752394.0)
♥ 74
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
```json
[
{"id":"rdc_kvdsj0q","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"rdc_kvej0kv","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"rdc_kvelo7h","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"rdc_kvjl30s","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"rdc_kvehp8j","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]
```
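A raw response like the one above has to be parsed and validated before the codes reach the results table. The sketch below shows one minimal way to do that; the allowed value sets are inferred only from the examples visible on this page (the real codebook likely defines more categories), and `parse_llm_response` is a hypothetical helper, not the tool's actual API.

```python
import json

# Allowed values per dimension, inferred from the coded examples above.
# ASSUMPTION: the real codebook may define additional categories.
ALLOWED = {
    "responsibility": {"none", "distributed", "ai_itself", "developer"},
    "reasoning": {"consequentialist"},
    "policy": {"none", "regulate"},
    "emotion": {"indifference", "resignation", "fear", "outrage"},
}

def parse_llm_response(raw: str) -> list[dict]:
    """Parse a raw LLM coding response and validate every record."""
    records = json.loads(raw)
    for rec in records:
        # Every record needs an id plus one value per coded dimension.
        missing = {"id", *ALLOWED} - rec.keys()
        if missing:
            raise ValueError(f"{rec.get('id', '?')}: missing fields {missing}")
        for dim, allowed in ALLOWED.items():
            if rec[dim] not in allowed:
                raise ValueError(f"{rec['id']}: bad {dim} value {rec[dim]!r}")
    return records

raw = """[
 {"id":"rdc_kvehp8j","responsibility":"developer",
  "reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]"""
records = parse_llm_response(raw)
print(records[0]["emotion"])  # fear
```

A record that fails validation raises immediately, which makes malformed model output (a common failure mode when coding at scale) surface as an error rather than a silently wrong code.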