Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
What annoys me are the armchair experts that go around with a 5 minute google, read some primer on neural networks written 7 years ago, and think they know how these new chatbots work. "Oh it doesn't have feelings, it's simply completing sentences!!!" In the past several months, the new chatbots are qualitatively more advanced than what existed before. They crossed a line that was never crossed before in human history. The output they generate at times is indistinguishable from human output. It is possible that it's still just code... really really good code. It's also possible that during all the training, some emergent property has given rise to some kind of sentience that we don't understand yet, or have poorly defined. A smart person will hold on to both possibilities, and not simply just discount the latter in favor of the former, with 100% certainty.
reddit · AI Moral Status · 1676601276.0 · ♥ 79
Coding Result
Dimension       Value
Responsibility  user
Reasoning       virtue
Policy          none
Emotion         outrage
Coded at        2026-04-25T08:33:43.502452
Raw LLM Response
[
  {"id":"rdc_j8vfh2y","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
  {"id":"rdc_j8vur3v","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
  {"id":"rdc_j8vz0el","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"rdc_j8v6586","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"rdc_j8uu2fc","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"outrage"}
]
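Because the model codes a batch of comments in one response, checking a single comment's coding means parsing the JSON array and looking up the record by its id. A minimal sketch of that lookup, assuming the response is valid JSON and that id `rdc_j8uu2fc` is the record for the comment shown above (its dimensions match the Coding Result table):

```python
import json

# The raw LLM response, verbatim as captured above.
raw = (
    '[{"id":"rdc_j8vfh2y","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"outrage"},'
    '{"id":"rdc_j8vur3v","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},'
    '{"id":"rdc_j8vz0el","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"},'
    '{"id":"rdc_j8v6586","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},'
    '{"id":"rdc_j8uu2fc","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"outrage"}]'
)

# Index the batch by comment id so one comment's coding can be pulled out.
records = {r["id"]: r for r in json.loads(raw)}

coded = records["rdc_j8uu2fc"]
print(coded["responsibility"], coded["reasoning"], coded["policy"], coded["emotion"])
```

This prints `user virtue none outrage`, matching the coded dimensions in the table above. A real pipeline would also want to verify that every expected id appears exactly once in the response before trusting the batch.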