Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
The thing is these models are not just stochastic parrots. That argument runs into a problem of explaining the explanation. Yes, the model uses probability to place the next word, but that probability is based on black boxes containing billions of points of information. We know the model generates a response, we don't really know HOW the model generates a response. That's part of machine learning. But the surprising factor are all the emergent abilities. The ability to summarize a document, feels pretty basic these days.... But we don't actually know how that happened. When a model showed itself capable of that, it came as a surprise to researchers at the time. There are lots of emergent behavior like that that destroys the notion that this is just a repetitive algorithm. Somewhere along the way the models learned human speech, we don't actually know how, they just do it now. No, it's not alive. No it's not sentient... But... Are we?
reddit · AI Moral Status · 1750951274.0 · ♥ 8
Coding Result
Dimension       Value
Responsibility  none
Reasoning       unclear
Policy          unclear
Emotion         indifference
Coded at        2026-04-25T08:33:43.502452
Raw LLM Response
[
  {"id": "rdc_mzwb5sk", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "rdc_mzvxbfh", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "mixed"},
  {"id": "rdc_mzvxxwo", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "rdc_mzxu7pc", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "mixed"},
  {"id": "rdc_mzw7svp", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"}
]
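The raw response is a JSON array with one coding object per comment, each carrying the four dimensions shown in the table above. A minimal sketch of how such a response might be parsed and validated, assuming only the field names visible in the dump (the `parse_codings` helper and its error handling are illustrative, not part of the original pipeline):

```python
import json

# Two entries copied from the raw response above, for demonstration.
raw = (
    '[{"id":"rdc_mzwb5sk","responsibility":"none","reasoning":"unclear",'
    '"policy":"unclear","emotion":"indifference"},'
    '{"id":"rdc_mzvxbfh","responsibility":"none","reasoning":"unclear",'
    '"policy":"unclear","emotion":"mixed"}]'
)

# Field names observed in the dump; any other schema detail is an assumption.
REQUIRED_FIELDS = {"id", "responsibility", "reasoning", "policy", "emotion"}

def parse_codings(raw_response: str) -> dict:
    """Parse a raw LLM coding response into a dict keyed by comment id.

    Raises ValueError if an entry is missing one of the expected dimensions.
    """
    codings = {}
    for entry in json.loads(raw_response):
        missing = REQUIRED_FIELDS - entry.keys()
        if missing:
            raise ValueError(f"entry {entry.get('id')!r} missing: {sorted(missing)}")
        # Store the four coding dimensions under the comment id.
        codings[entry["id"]] = {k: entry[k] for k in REQUIRED_FIELDS - {"id"}}
    return codings

codings = parse_codings(raw)
print(codings["rdc_mzwb5sk"]["emotion"])  # indifference
```

Keying by `id` makes it straightforward to join each coding back to its source comment, which is how the per-comment table above would be produced.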