Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I think that we should not discriminate humans and AI beings on these moral issues. Any "moral being" is not supposed to hurt/harm others. So we expect the being to be having capabilities to estimate the consequential hurt/harm on others and suitably adjust its behaviour so as to minimise such hurt/harm. The assessed hurt/pain need not be the real pain suffered by any conscious person. It is enough if the pain is as determined by AI mechanisms because it is used in comparisons only, enabling a suitable choice. I believe prioritizing the concerns of conscious persons over those of things is important. The moral status of any AI being is in its behaviour (conducts over time). For any "AI being" there is no internal/external. I think.
reddit · AI Responsibility · 1615694480.0 · ♥ 1
Coding Result
Responsibility: ai_itself
Reasoning: consequentialist
Policy: unclear
Emotion: approval
Coded at: 2026-04-25T08:33:43.502452
Raw LLM Response
[
  {"id": "rdc_gqtc6zf", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "unclear"},
  {"id": "rdc_gqv96dz", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "approval"},
  {"id": "rdc_gqzgrv9", "responsibility": "unclear", "reasoning": "mixed", "policy": "unclear", "emotion": "resignation"},
  {"id": "rdc_ohpy5zp", "responsibility": "company", "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"},
  {"id": "rdc_o8sewqi", "responsibility": "ai_itself", "reasoning": "mixed", "policy": "unclear", "emotion": "mixed"}
]
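The raw response is a JSON array with one coding object per comment id, which is how the per-comment table above is derived. A minimal sketch of that lookup, assuming only the JSON shape shown here (the function name and variable are illustrative, not part of any official API):

```python
import json

# Abbreviated copy of the raw LLM response shown above (one entry kept).
raw = '[{"id": "rdc_gqv96dz", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "approval"}]'

def coding_for(raw_response: str, comment_id: str):
    """Return the coding dict for comment_id, or None if it is absent."""
    for item in json.loads(raw_response):
        if item.get("id") == comment_id:
            return item
    return None

coding = coding_for(raw, "rdc_gqv96dz")
print(coding["responsibility"])  # → ai_itself
```

Matching on `id` rather than array position keeps the lookup robust if the model returns the entries in a different order.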