Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
> is it basically just searching it's "database" for any mention of "here is a correlation that no one has discovered", so essentially looking for when a human has written something somewhere about it already?

There's no database or mentions. LLM keep data stored as "token weights". Funniest thing is that starting weight values are just randomized and then adjusted during training. You could actually imagine talking with GPT as asking your question to a million random people, then receiving average answer compiled from all this millions different words that random people would say. So even if there's one very smart person would correctly answer your question, his voice would be lost in a crowd.
Source: reddit · AI Responsibility · 1734382834.0 · ♥ 120
Coding Result
| Dimension | Value |
| --- | --- |
| Responsibility | unclear |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | unclear |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
```json
[
  {"id": "rdc_m2eswia", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "approval"},
  {"id": "rdc_m2crpi9", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "mixed"},
  {"id": "rdc_m2e0rdk", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "mixed"},
  {"id": "rdc_m2chafa", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "mixed"},
  {"id": "rdc_m2d500v", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"}
]
```
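A raw batch response in this shape has to be parsed and validated before per-comment codes can be stored; a malformed bracket like the one above would otherwise surface only later as missing rows. Below is a minimal sketch of such a check. The function name and the allowed-value sets are assumptions: the sets list only the codes that appear in this log, and the real codebook almost certainly contains more values.

```python
import json

# Codes observed in this log; the full codebook likely defines more values.
ALLOWED = {
    "responsibility": {"none", "unclear"},
    "reasoning": {"unclear"},
    "policy": {"none", "unclear"},
    "emotion": {"approval", "mixed", "indifference", "unclear"},
}

def parse_batch(raw: str) -> dict[str, dict[str, str]]:
    """Parse a raw batch response into {comment_id: codes}.

    Raises ValueError (or json.JSONDecodeError) when the JSON is
    malformed or a code falls outside the allowed set, so a bad
    model response is rejected instead of silently stored.
    """
    items = json.loads(raw)
    coded = {}
    for item in items:
        cid = item.pop("id")
        for dim, value in item.items():
            if value not in ALLOWED.get(dim, set()):
                raise ValueError(f"{cid}: unexpected {dim}={value!r}")
        coded[cid] = item
    return coded

# Usage with the first entry of the batch shown above.
raw = '[{"id":"rdc_m2eswia","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"}]'
codes = parse_batch(raw)
print(codes["rdc_m2eswia"]["emotion"])  # prints "approval"
```

Validating against a closed code set also catches the common failure mode where the model invents a label outside the codebook.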