Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
As someone who is in the AI-field, this is staight-up fearmongering at its finest. Yes, AI is getting more powerful, but it's nowhere near a threat to humans. LLM models lack critical thinking and creativity, and on top do hallucinate a lot. I can't see them automating anything in the near future, not without rigorous supervision at least. Chat- or callbots sure, basic programming sure, stock photography sure. All of them don't require any ceativity, at least in the way they're used. Even if these things are somehow magically solved, it still requires massive infra to handle huge AIs. Also, they're all GIGO until now - garbage in, garbage out. If you finetune them to be friendly, they will. Well, until someone jailbreaks them ;)
reddit AI Responsibility 1710734654.0 ♥ 192
Coding Result
Dimension        Value
Responsibility   none
Reasoning        consequentialist
Policy           none
Emotion          indifference
Coded at         2026-04-25T08:33:43.502452
Raw LLM Response
[
  {"id":"rdc_kvdsj0q","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"rdc_kvej0kv","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"rdc_kvelo7h","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"rdc_kvjl30s","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
  {"id":"rdc_kvehp8j","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]
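A minimal sketch of how a response like the one above could be parsed and sanity-checked before use. This is not the tool's own code: the `DIMENSIONS` value sets are only inferred from the codes visible in this response, and the record ids are taken verbatim from it.

```python
import json

# Raw LLM response: a JSON array with one coding record per comment id.
raw = """[
  {"id":"rdc_kvdsj0q","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"rdc_kvej0kv","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"rdc_kvelo7h","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"rdc_kvjl30s","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
  {"id":"rdc_kvehp8j","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]"""

records = json.loads(raw)
by_id = {rec["id"]: rec for rec in records}

# Allowed values per dimension -- an assumption, inferred only from the
# codes that appear in this one response, not an official codebook.
DIMENSIONS = {
    "responsibility": {"none", "distributed", "ai_itself", "developer"},
    "reasoning": {"consequentialist"},
    "policy": {"none", "regulate"},
    "emotion": {"indifference", "resignation", "fear", "outrage"},
}

# Reject any record with a missing dimension or an out-of-vocabulary code.
for rec in records:
    for dim, allowed in DIMENSIONS.items():
        if rec.get(dim) not in allowed:
            raise ValueError(f"record {rec['id']}: bad {dim}={rec.get(dim)!r}")

# Look up the record that produced the table above.
print(by_id["rdc_kvdsj0q"]["emotion"])  # -> indifference
```

Indexing by `id` makes it easy to join each coded record back to its source comment, and the vocabulary check catches the common failure mode of an LLM emitting a code outside the expected label set.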