Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
The problem with a question like this is you need a giant context to process all of its knowledge. I look forward to the day we can ask AI to connect the dots that we have missed. Just consider it consuming 100 papers and finding that each mentions a piece of the puzzle that equals a result we missed. I think even in that example the prompt to prime it with connecting the dots matters a lot as it needs to specifically consider each paper independently, then analyze each in detail, and not just create an echo chamber of aggregated knowledge out of it. I can’t even get my 19 year old kids to connect the dots often times.
Source: reddit · Topic: AI Responsibility · Timestamp: 1734398534.0 · ♥ 2
Coding Result
Dimension: Value
Responsibility: none
Reasoning: unclear
Policy: none
Emotion: approval
Coded at: 2026-04-25T08:33:43.502452
Raw LLM Response
[{"id":"rdc_m2dbbvj","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"}, {"id":"rdc_m2dn8xd","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"}, {"id":"rdc_m2famdq","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"}, {"id":"rdc_m2k7nru","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"mixed"}, {"id":"rdc_m2l0th9","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"}]
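The batched response above carries one JSON object per coded comment, keyed by id, and the table on this page corresponds to the first record (`rdc_m2dbbvj`). A minimal sketch of mapping the raw response back to a comment's dimensions (field names taken directly from the response; shortened here to the first record for brevity):

```python
import json

# First record of the batched model response shown above.
raw = ('[{"id":"rdc_m2dbbvj","responsibility":"none","reasoning":"unclear",'
       '"policy":"none","emotion":"approval"}]')

records = json.loads(raw)

# Look up the coded dimensions for a given comment id.
coded = next(r for r in records if r["id"] == "rdc_m2dbbvj")
print(coded["responsibility"], coded["reasoning"], coded["policy"], coded["emotion"])
# prints: none unclear none approval
```

The same lookup works unchanged against the full five-record batch, since each record is self-describing.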