Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
LLMs cannot “think” or “reason” like we do. It’s a clever algorithm that has strung together pieces of our languages to mimic human speech patterns. School-aged kids and teens who are using AI have already shown that their understanding of English has skewed slightly, suggesting that it’s affecting language change. The difference here is that the humans whose linguistic markers are changing are still following human language constructs, while LLMs just become more derivative. I haven’t found a single “repetitive job” that isn’t simply made more complicated by wedging AI into the workflow. The only instances I’ve seen it used successfully are when that task has to be repeated hundreds of thousands of times, like scientific research. However, I would need to see the prompts used; I’d argue that AI wasn’t so much used for the research as, what’s more likely, a more complex metadata analysis.
reddit AI Responsibility 1754927172.0 ♥ 2
Coding Result
Dimension       Value
Responsibility  none
Reasoning       unclear
Policy          none
Emotion         indifference
Coded at        2026-04-25T08:33:43.502452
Raw LLM Response
[
  {"id":"rdc_n84ms00","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"rdc_n84rp2b","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"rdc_n85nhtc","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"fear"},
  {"id":"rdc_n87zm4f","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
  {"id":"rdc_n8a8dqe","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"}
]
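The raw response is a JSON array with one coding record per comment id, so recovering the coded dimensions for a given comment is a straightforward parse-and-index step. A minimal sketch (the helper name `index_codings` is hypothetical, not part of the coding pipeline):

```python
import json

# The batch response as returned by the model: a JSON array of coding records.
raw_response = '''[
  {"id":"rdc_n84ms00","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"rdc_n84rp2b","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"rdc_n85nhtc","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"fear"},
  {"id":"rdc_n87zm4f","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
  {"id":"rdc_n8a8dqe","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"}
]'''

def index_codings(raw: str) -> dict:
    """Parse the raw batch response and index each coding record by comment id."""
    return {record["id"]: record for record in json.loads(raw)}

codings = index_codings(raw_response)
# Look up the dimensions coded for the comment shown above.
print(codings["rdc_n84ms00"]["emotion"])  # indifference
```

Indexing by `id` makes it easy to join the model's codings back to the original comments, and a `KeyError` on lookup flags any comment the model silently dropped from the batch.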