Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
That was my thought, too. It'd be pretty easy to train an algorithm in such a way that it will *probably* do something illegal while making it hard to prove that that was the intent. (eg. for hiring decisions, it takes actual work to make sure your algorithms aren't learning the same possibly-illegal biases that are present in whatever dataset you're training it with.)
reddit AI Moral Status 1524945618.0 ♥ 5
Coding Result
Dimension       Value
Responsibility  developer
Reasoning       consequentialist
Policy          regulate
Emotion         fear
Coded at        2026-04-25T08:33:43.502452
Raw LLM Response
[
  {"id": "rdc_dy4rwfz", "responsibility": "company", "reasoning": "deontological", "policy": "liability", "emotion": "outrage"},
  {"id": "rdc_dy4epou", "responsibility": "unclear", "reasoning": "consequentialist", "policy": "unclear", "emotion": "indifference"},
  {"id": "rdc_dy4f5pg", "responsibility": "company", "reasoning": "deontological", "policy": "liability", "emotion": "resignation"},
  {"id": "rdc_dy4pyyx", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "mixed"},
  {"id": "rdc_dy4nl6o", "responsibility": "developer", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"}
]
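As the raw response above shows, the model returns one JSON array per batch, with each element keyed by a comment `id`. A minimal sketch of how such a response can be parsed and the codes for a single comment looked up (the ids and field names are taken from the response above; the variable names are illustrative, not part of any real pipeline):

```python
import json

# Raw LLM response: a JSON array of per-comment codes, as in the dump above.
raw = """[
  {"id": "rdc_dy4rwfz", "responsibility": "company", "reasoning": "deontological", "policy": "liability", "emotion": "outrage"},
  {"id": "rdc_dy4nl6o", "responsibility": "developer", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"}
]"""

# Index the batch by comment id for quick lookup.
codes = {entry["id"]: entry for entry in json.loads(raw)}

# The coding result for the comment shown above (rdc_dy4nl6o).
result = codes["rdc_dy4nl6o"]
print(result["responsibility"], result["policy"], result["emotion"])
# → developer regulate fear
```

Indexing by `id` makes it straightforward to join the model's codes back to the original comments, and any id missing from the dictionary flags a comment the model failed to code.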