Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Sure they will, when it is reasonable to do so. Unless a model can perform unsupervised work, the same unsupervised work that most people can or could do, autonomously and without humans needing to double-check all of its output, then not only is it unreasonable to call the current models AGI (they certainly can't do these things yet within a reasonable error level), but these models are not yet useful for any task other than in a supplementary role. This hardly fits the description of humans, does it? Most entry-level work, if I provided sufficient training and quality guidelines for it, could be accomplished by your average human without hour-by-hour, minute-by-minute supervision. Yes, we often require that their work be double-checked later as well, but there are typically things these people consistently do properly without any significant errors. And humans are capable of effectively double-checking their own work or the work of others. AI can't yet, not with the level of thoroughness and proactivity needed to run a business.
Source: reddit · Topic: AI Moral Status · Unix timestamp: 1752781791.0 · ♥ 7
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  none
Reasoning       unclear
Policy          none
Emotion         indifference
Coded at        2026-04-25T08:33:43.502452
Raw LLM Response
[
  {"id": "rdc_n3ov51u", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "rdc_n3r8m4z", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "rdc_n3ol68p", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "rdc_n3ow8xk", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "rdc_n3q6b3q", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "mixed"}
]
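How a row of the coding result is recovered from the raw response can be sketched as follows. This is a minimal Python sketch, assuming the response is a JSON array with one record per coded comment; the choice of id `rdc_n3ol68p` below is an assumption, picked only because its values match the dimension/value pairs shown for this comment (the id-to-comment mapping itself is not part of this log):

```python
import json

# Raw LLM response copied verbatim from this log entry.
raw = (
    '[{"id":"rdc_n3ov51u","responsibility":"none","reasoning":"consequentialist",'
    '"policy":"none","emotion":"indifference"},'
    '{"id":"rdc_n3r8m4z","responsibility":"none","reasoning":"consequentialist",'
    '"policy":"none","emotion":"approval"},'
    '{"id":"rdc_n3ol68p","responsibility":"none","reasoning":"unclear",'
    '"policy":"none","emotion":"indifference"},'
    '{"id":"rdc_n3ow8xk","responsibility":"none","reasoning":"consequentialist",'
    '"policy":"none","emotion":"indifference"},'
    '{"id":"rdc_n3q6b3q","responsibility":"none","reasoning":"unclear",'
    '"policy":"none","emotion":"mixed"}]'
)

# Parse the batch and index each coded record by its comment id.
records = json.loads(raw)
by_id = {rec["id"]: rec for rec in records}

# Assumed id: rdc_n3ol68p is the record whose values match the
# dimension/value table above (reasoning "unclear", emotion "indifference").
rec = by_id["rdc_n3ol68p"]
for dim in ("responsibility", "reasoning", "policy", "emotion"):
    print(f"{dim}: {rec[dim]}")
```

Indexing by id rather than by position keeps the lookup stable even if the model returns the records in a different order from one run to the next.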