Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
AI can’t know anything, it matrix math telling itself how likely any two variables being related is. Correlation is useful, but it is not causation. AI’s are structurally blind to the arrow of time. Impossible to be “intelligent” whatever that may even mean, with such a deficit. They do not have an understanding of truth, gravity and an often repeated shitpost on tumblr have similar levels of pertinence to a LLM. Broadly trained LLMs will never be of any use beyond being ersatz social relationships.
Source: reddit · Thread: Viral AI Reaction · Timestamp: 1776795041.0 · ♥ 1
Coding Result
Dimension       Value
Responsibility  ai_itself
Reasoning       deontological
Policy          none
Emotion         mixed
Coded at        2026-04-25T08:33:43.502452
Raw LLM Response
[
  {"id":"rdc_ohh2lxg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"rdc_ohhis5g","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"mixed"},
  {"id":"rdc_ohkunol","responsibility":"company","reasoning":"virtue","policy":"ban","emotion":"outrage"},
  {"id":"rdc_ohseou7","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"rdc_ohz5sst","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"resignation"}
]
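The raw response is a batch: one JSON record per comment, keyed by `id`. A minimal sketch of how a single comment's coding could be recovered from such a batch (the variable names and the lookup-by-id approach here are illustrative assumptions, not the tool's actual implementation):

```python
import json

# Raw batched LLM response, truncated to two records for brevity.
raw = '''[
  {"id":"rdc_ohh2lxg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"rdc_ohhis5g","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"mixed"}
]'''

records = json.loads(raw)

# Index the batch by comment id so each comment's coding can be
# looked up directly rather than scanned for.
by_id = {rec["id"]: rec for rec in records}

# The coding shown in the table above corresponds to id "rdc_ohhis5g".
coding = by_id["rdc_ohhis5g"]
print(coding["responsibility"], coding["reasoning"], coding["emotion"])
```

Indexing by `id` also makes it easy to spot records the model dropped or duplicated: compare `by_id.keys()` against the set of comment ids that were sent in the batch.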