Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
This is such an interesting philosophical concept to me because, at the end of the day, aren’t we also just using stimulus -> response, or situation->result, problem->solution reasoning pathways? We observe something happening a bunch of times and then we can feel comfortable predicting a result based on the inputs being set up a certain way, and we can’t always describe what exactly is happening in our minds or how we know what we know. Isn’t that almost exactly what AI is doing?
Source: reddit · Viral AI Reaction · 1777059550.0 · ♥ 2
Coding Result
Dimension        Value
Responsibility   unclear
Reasoning        mixed
Policy           unclear
Emotion          approval
Coded at         2026-04-25T08:33:43.502452
Raw LLM Response
[
  {"id": "rdc_ohyuu7x", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "mixed"},
  {"id": "rdc_ohz7qe2", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "rdc_oi1u0oo", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "rdc_oi2liqg", "responsibility": "unclear", "reasoning": "mixed", "policy": "unclear", "emotion": "approval"},
  {"id": "rdc_oi0hnkm", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "approval"}
]
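The raw response is a batched JSON array covering five comments; the coding result shown above corresponds to the entry with id `rdc_oi2liqg`. A minimal sketch of pulling one comment's codes out of such a batch (the helper name `code_for` is illustrative, not part of the tool):

```python
import json

# Batched model response, verbatim from the raw output above.
raw = (
    '[ {"id":"rdc_ohyuu7x","responsibility":"none","reasoning":"unclear",'
    '"policy":"none","emotion":"mixed"},'
    ' {"id":"rdc_ohz7qe2","responsibility":"distributed","reasoning":"consequentialist",'
    '"policy":"unclear","emotion":"fear"},'
    ' {"id":"rdc_oi1u0oo","responsibility":"none","reasoning":"unclear",'
    '"policy":"none","emotion":"indifference"},'
    ' {"id":"rdc_oi2liqg","responsibility":"unclear","reasoning":"mixed",'
    '"policy":"unclear","emotion":"approval"},'
    ' {"id":"rdc_oi0hnkm","responsibility":"none","reasoning":"unclear",'
    '"policy":"none","emotion":"approval"} ]'
)

def code_for(records, comment_id):
    """Return the coding dict for one comment id, or None if it is absent."""
    return next((r for r in records if r["id"] == comment_id), None)

records = json.loads(raw)
row = code_for(records, "rdc_oi2liqg")
print(row["reasoning"], row["emotion"])  # mixed approval
```

The dimensions in the dashboard table (Responsibility, Reasoning, Policy, Emotion) map one-to-one onto the keys of the matching JSON record.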