Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
It's such a sharp tipping point I guess. There's a world of difference between what we have and call AI now and what AGI would be. Once you have true AGI... you basically have accelerated the growth of AGI by massive scales. It would be able to iterate its own code and hardware much faster than humans. No sleep. No food. No family. The combined knowledge from and ability to comprehend every scientific paper ever published. It could have many bodies and create them from scratch - self replicating. It would want to improve itself likely, inventing new technology to Improve battery capacity or whatever. Once you flip that agi switch there's really no telling what happens next. Even the process of developing AGI is dangerous. Like say some company accidently releases something resembling AGI along the way and it starts doing random things like hacking banks and major networks. Not true AGI but still capable enough to cause catastrophe
Source: reddit · Category: AI Responsibility · Posted: 1710736821.0 · ♥ 35
Coding Result
Dimension        Value
Responsibility   ai_itself
Reasoning        consequentialist
Policy           unclear
Emotion          fear
Coded at         2026-04-25T08:33:43.502452
Raw LLM Response
[ {"id":"rdc_kves43a","responsibility":"none","reasoning":"consequentialist","policy":"regulate","emotion":"indifference"}, {"id":"rdc_kvdwcrk","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}, {"id":"rdc_kvdli81","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}, {"id":"rdc_kvdskkz","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"outrage"}, {"id":"rdc_kvepm08","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"} ]