Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I believe that this is not likely going to be an issue that needs to be considered in the forseeable future. Deep learning machines are becoming more popular, but they are all still being designed to accomplish specific goals. to teach a machine to make decisions on what is moral would strip humans of the power to decide those things and determine our own future. Asimov's three laws are flawed if you ask a machine to serve the "greatest number". But those laws still work if you made the rules more black and white. By that, I mean if any decision results in the loss of even *one* human, the machine should be forced to defer to a human's judgement rather than making a decision on its own.
reddit AI Bias 1438004016.0 ♥ 6
Coding Result
Dimension       Value
Responsibility  developer
Reasoning       consequentialist
Policy          none
Emotion         resignation
Coded at        2026-04-25T08:06:44.921194
Raw LLM Response
[
  {"id": "rdc_crxj1lx", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "outrage"},
  {"id": "rdc_cthnv9b", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "rdc_cthnynz", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "approval"},
  {"id": "rdc_ctho65r", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "rdc_cthpyoc", "responsibility": "developer", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"}
]
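A minimal sketch of how the coding result for this comment might be recovered from the raw batch response: parse the JSON array and index the records by id. The id `rdc_cthpyoc` is an assumption here, inferred from the fact that its dimension values match the Coding Result shown above; the actual id-to-comment mapping lives in the coding pipeline, not in this dump.

```python
import json

# Raw batch response, copied verbatim from the dump above.
raw = (
    '[{"id":"rdc_crxj1lx","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"outrage"},'
    '{"id":"rdc_cthnv9b","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},'
    '{"id":"rdc_cthnynz","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},'
    '{"id":"rdc_ctho65r","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},'
    '{"id":"rdc_cthpyoc","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"resignation"}]'
)

records = json.loads(raw)
by_id = {r["id"]: r for r in records}

# Assumed id for this comment, inferred from the matching Coding Result values
# (responsibility=developer, reasoning=consequentialist, emotion=resignation).
match = by_id["rdc_cthpyoc"]
print(match["responsibility"], match["reasoning"], match["emotion"])
```

Indexing by id rather than by position keeps the lookup stable even if the model returns records in a different order than the comments were submitted.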