Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
At some point a few things are going to happen: The efficiencies of the models are going to increase, and their sizes decrease. They are going to get smaller, faster, and more capable. Hardware is going to become widely and cheaply available that is optimized for running LLMs. We're going to have commercial "botboxes" that don't even have visible OSs, just an AI that you interact with. At some point we're going to be running LLMs that are more capable than GPT4, at home. The commercially available models, of course, will have guardrails. The 4chan models, of course, won't.
reddit AI Responsibility 1682523169.0 ♥ 148
Coding Result
Dimension       Value
Responsibility  none
Reasoning       consequentialist
Policy          none
Emotion         approval
Coded at        2026-04-25T08:33:43.502452
Raw LLM Response
[
  {"id": "rdc_oi3haa1", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "rdc_oi2v9wj", "responsibility": "user", "reasoning": "deontological", "policy": "none", "emotion": "approval"},
  {"id": "rdc_oi3njcm", "responsibility": "user", "reasoning": "virtue", "policy": "none", "emotion": "approval"},
  {"id": "rdc_oi2gu5o", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "rdc_jhsohx9", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"}
]
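The raw response above is a JSON array of per-comment codings keyed by an `id` field. As a minimal sketch of how such a response can be parsed and indexed (variable names here are illustrative, not part of the coding tool), one might do:

```python
import json

# Raw LLM response as shown above: a JSON array of coding objects.
raw_response = '''[
  {"id": "rdc_oi3haa1", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "rdc_oi2v9wj", "responsibility": "user", "reasoning": "deontological", "policy": "none", "emotion": "approval"},
  {"id": "rdc_oi3njcm", "responsibility": "user", "reasoning": "virtue", "policy": "none", "emotion": "approval"},
  {"id": "rdc_oi2gu5o", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "rdc_jhsohx9", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"}
]'''

codings = json.loads(raw_response)
# Index by comment id so one comment's coding can be looked up directly.
by_id = {item["id"]: item for item in codings}

print(by_id["rdc_oi3haa1"]["emotion"])    # -> resignation
print(by_id["rdc_oi2v9wj"]["reasoning"])  # -> deontological
```

Indexing by `id` makes it easy to join each coding back to its source comment, and a length check (`len(codings)`) catches responses where the model dropped or duplicated items.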