Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Philosophically, what's the difference between one consciousness imprinting information on another and the application of weights to predict incremental responses based on training data sets? One just seems to be a technical representation of a form of knowledge/skill transfer with extra steps. I agree that LLMs are probably closer to slugs than to humans on the consciousness/self-awareness spectrum, but his argument isn't particularly coherent.

> He describes language models as blurred imitations of the text they were trained on, rearrangements of word sequences that obey the rules of grammar.

The crucial puzzle piece always missing in these discussions is how the human brain does it. Aren't we also constructing language procedurally, based on grammar rules we were trained on? The illusion that what we do is "creation" is a remnant of the strong egotism inherent in our self-awareness. Procedurally there's very little difference, which is why the Turing test was a blind trial.
reddit · AI Jobs · 1685851607.0 · ♥ 49
Coding Result
Dimension       Value
Responsibility  unclear
Reasoning       unclear
Policy          unclear
Emotion         unclear
Coded at        2026-04-25T08:33:43.502452
Raw LLM Response
[{"id":"rdc_jmtpc3l","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"}, {"id":"rdc_jmtr4rn","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"}, {"id":"rdc_jmvhp54","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"}, {"id":"rdc_jmuhvc3","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"}, {"id":"rdc_jmuyrpq","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"})
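Note that the raw response above is not valid JSON: the array opens with `[` but closes with `)`. That would plausibly explain why every dimension in the coding result reads "unclear" even though, for example, all five runs agree that responsibility is "none". The sketch below illustrates one way such a pipeline might work, assuming it parses the response as JSON, falls back to "unclear" on a parse failure, and otherwise takes unanimous labels across runs; the function and dimension names are illustrative, not the actual pipeline's API.

```python
import json

# Dimensions coded for each comment, as shown in the result table above.
DIMENSIONS = ["responsibility", "reasoning", "policy", "emotion"]


def code_dimensions(raw_response: str) -> dict:
    """Reduce a raw multi-run LLM response to one value per dimension."""
    try:
        runs = json.loads(raw_response)
    except json.JSONDecodeError:
        # Malformed output (e.g. a stray ")" where "]" belongs) cannot be
        # interpreted, so every dimension falls back to "unclear".
        return {d: "unclear" for d in DIMENSIONS}

    result = {}
    for d in DIMENSIONS:
        labels = {run.get(d) for run in runs}
        # Unanimous runs yield that label; any disagreement yields "unclear".
        result[d] = labels.pop() if len(labels) == 1 else "unclear"
    return result


# The response above ends in ")" instead of "]", so parsing fails:
broken = '[{"responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"})'
print(code_dimensions(broken))
# → every dimension "unclear", matching the coding result shown above
```

Under this reading, the all-"unclear" row is a parsing artifact rather than genuine disagreement between runs.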