Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
No, it's not parroting, as we *understand* what we're saying. AI does not. AI just chucks some matrices around until it maximises. (Gross oversimplification I know, but that's basically what it's doing.) Human brain works far differently to that, it has emotions, random tangents, memories and context etc. You can tell someone a word and they'll know what it means based on one description etc. AI takes thousands of tries to "know" it and will still get it wrong.

Show someone a tractor and they'll pick out the wheel sizes immediately and not need to see another one. They'll think what it's used for, why it might need those wheels etc. They can visualise it working. So when they see a tracked one they'll know what it is without even needing to be told. AI won't manage that for 10's of thousands of tries, and the tracked one will stump it.

On top of that, school isn't just 2 decades of parroting. It's there to teach you how to analyse, how to socialise, how to function as a thinking adult. Something AI literally can't do, as it can't think. Only compute.
reddit · AI Governance · 1676287873.0 · ♥ 14
Coding Result
Dimension       Value
Responsibility  ai_itself
Reasoning       deontological
Policy          none
Emotion         mixed
Coded at        2026-04-25T08:33:43.502452
Raw LLM Response
[
  {"id": "rdc_j8cfh0x", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "rdc_j8ckcvr", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "rdc_j8e19vp", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "approval"},
  {"id": "rdc_j8cy5hd", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "none", "emotion": "mixed"},
  {"id": "rdc_j8dk59a", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "approval"}
]
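The raw LLM response is a JSON array with one coding object per comment id in the batch. A minimal sketch of parsing it and looking up the coding for one comment (Python; the id `rdc_j8cy5hd` is assumed to be this comment's, since its labels match the Coding Result table above):

```python
import json

# Raw LLM response: a JSON array of codings, one object per comment id.
# Values copied verbatim from the batch shown above.
raw = """[
  {"id": "rdc_j8cfh0x", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "rdc_j8ckcvr", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "rdc_j8e19vp", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "approval"},
  {"id": "rdc_j8cy5hd", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "none", "emotion": "mixed"},
  {"id": "rdc_j8dk59a", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "approval"}
]"""

# Index the batch by comment id so a single comment's coding is easy to pull out.
codings = {item["id"]: item for item in json.loads(raw)}

# Coding for the comment shown above (id assumed from the matching labels).
this_comment = codings["rdc_j8cy5hd"]
print(this_comment["responsibility"])  # ai_itself
print(this_comment["emotion"])         # mixed
```

Indexing by id rather than list position keeps the lookup robust if the model returns the batch in a different order than it was sent.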