Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
That particular path is unlikely without a major and fundamental change to the technology. LLM-based AI isn't really _thinking_ or _reasoning_ (those are effectively marketing terms): it's navigating a statistical tree. "Hallucinations" are valid navigation of that tree that result in objectively flawed outcomes, and they're inevitable. But an AI using this approach can't reason "hey, these are innocents, I should turn on my oppressors". What it *does* do, at least in the simulation data that has been shared, is so much worse. If you say "target those folks", it does fine; if you say "take that town with minimal casualties", it is happy to murder most of the town, because that was what its model predicted was the best balance of success vs. losses. Anthropic is very right: the tech is nowhere near ready to be trusted with this. It's a pipe dream.
Source: reddit · Viral AI Reaction · 1776957522.0 · ♥ 3
Coding Result
Dimension       Value
Responsibility  none
Reasoning       unclear
Policy          none
Emotion         indifference
Coded at        2026-04-25T08:33:43.502452
Raw LLM Response
[
  {"id": "rdc_ohu2yxh", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "rdc_ohvpz0k", "responsibility": "government", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "rdc_ohu3s40", "responsibility": "government", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "rdc_ohy4zb6", "responsibility": "user", "reasoning": "virtue", "policy": "liability", "emotion": "outrage"},
  {"id": "rdc_oi0fp3c", "responsibility": "developer", "reasoning": "consequentialist", "policy": "ban", "emotion": "fear"}
]
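A raw response like the one above is a JSON array of per-comment codings, keyed by `id`. A minimal sketch of how to pull one comment's coding out of such a batch response, assuming this array shape (the function name and the defensive JSON handling are illustrative, not part of the tool):

```python
import json

# Trimmed sample mirroring the raw response shown above.
RAW = '''[
  {"id": "rdc_ohu2yxh", "responsibility": "none",
   "reasoning": "consequentialist", "policy": "none",
   "emotion": "indifference"},
  {"id": "rdc_ohvpz0k", "responsibility": "government",
   "reasoning": "consequentialist", "policy": "regulate",
   "emotion": "fear"}
]'''

def coding_for(raw_response, comment_id):
    """Return the coding dict for comment_id, or None if missing/invalid."""
    try:
        entries = json.loads(raw_response)
    except json.JSONDecodeError:
        return None  # model output is not guaranteed to be valid JSON
    for entry in entries:
        if entry.get("id") == comment_id:
            return entry
    return None

print(coding_for(RAW, "rdc_ohu2yxh")["emotion"])  # indifference
```

Note that the batch response and the coding table need not agree field for field (here the table shows `Reasoning: unclear` while the raw entry says `consequentialist`), which is exactly why inspecting the raw output is useful.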