Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
They have been wrong and late and that sucks, so now they wish to be proven right and this desire makes them believe it is more likely to come true and anything going against this is pretty much discarded as meaningless. They try the algorithms and when it helps them it was just easy anyway and when it makes mistakes it's proof that they were right all along. Also AI is bubble and the algorithms are not getting better they are still making mistakes after all. And here is the latest video with one of the dozen godfathers of AI saying that AGI is 5 years away so clearly all the bulls saying AGI in 2027 are very wrong and LLMs are a dead end....
reddit · AI Moral Status · 1765322636.0 · ♥ 13
Coding Result
Dimension       Value
Responsibility  user
Reasoning       virtue
Policy          unclear
Emotion         outrage
Coded at        2026-04-25T08:33:43.502452
Raw LLM Response
[
  {"id": "rdc_nt6oya6", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "rdc_nt6bg85", "responsibility": "company", "reasoning": "consequentialist", "policy": "unclear", "emotion": "mixed"},
  {"id": "rdc_nt75l2z", "responsibility": "user", "reasoning": "virtue", "policy": "unclear", "emotion": "outrage"},
  {"id": "rdc_nt69gcw", "responsibility": "none", "reasoning": "unclear", "policy": "ban", "emotion": "outrage"},
  {"id": "rdc_nt6ib7u", "responsibility": "developer", "reasoning": "deontological", "policy": "none", "emotion": "outrage"}
]
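The raw response is a JSON array with one object per coded comment, keyed by id; the record with id rdc_nt75l2z appears to be the one summarized in the Coding Result table above. A minimal sketch of how such a response could be parsed and a single comment's coding looked up (the variable names here are illustrative, not part of the tool):

```python
import json

# The raw model output shown above: a JSON array, one object per coded comment.
raw = (
    '[{"id":"rdc_nt6oya6","responsibility":"none","reasoning":"unclear",'
    '"policy":"unclear","emotion":"indifference"},'
    '{"id":"rdc_nt6bg85","responsibility":"company","reasoning":"consequentialist",'
    '"policy":"unclear","emotion":"mixed"},'
    '{"id":"rdc_nt75l2z","responsibility":"user","reasoning":"virtue",'
    '"policy":"unclear","emotion":"outrage"},'
    '{"id":"rdc_nt69gcw","responsibility":"none","reasoning":"unclear",'
    '"policy":"ban","emotion":"outrage"},'
    '{"id":"rdc_nt6ib7u","responsibility":"developer","reasoning":"deontological",'
    '"policy":"none","emotion":"outrage"}]'
)

records = json.loads(raw)

# Index the records by comment id so one comment's coding can be looked up directly.
by_id = {rec["id"]: rec for rec in records}

coding = by_id["rdc_nt75l2z"]
print(coding["responsibility"], coding["reasoning"], coding["policy"], coding["emotion"])
# → user virtue unclear outrage
```

Indexing by id rather than scanning the list keeps lookups O(1) when a batch contains many coded comments.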