Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples

- ytc_Ugy9T9g7o…: "Until recently, I was a commercial artist. My work wasn't phenomenal, but good e…"
- ytc_UgwhX8jgt…: "Sales, business, networking, politicians, engineering and... (Hoping) supervisor…"
- ytc_UgxNkPzRt…: "You know man you got a lot of great stuff in this video but you are relying way …"
- ytc_Ugzx6LOtT…: "What a fool.. Humans build and in time things only get better and smarter just l…"
- ytc_UgygAViWl…: "heres my support on AI art by making arguments with thought put into them rather…"
- ytc_Ugw9qEYCl…: "We need to learn to do things for our self if we become self-reliant on computer…"
- ytc_Ugwsy769f…: "The AI drones won't be able to maintain themselves, build new batteries, keep po…"
- ytc_UgxIY_lH2…: "The danger will be when AI integrates into Robotics. At the moment AI is mainly …"
Comment
>If consciousness is included then would "moral status" extend to other animals?
Yes, insofar as we think that animals are conscious.
>I also want to note that I don't think our morality can be decoupled from our feelings so how is the robot to be programmed to map to our morality when we're trying to disregard our feelings as being at least a partial determiner of them?
There are different senses of being moral, I suppose, but "not causing unnecessary suffering" and "not infringing upon people's rights" and other such criteria are action-based, so it is theoretically straightforward to program a machine not to do those harmful things even if it has no emotions or consciousness. There has been some work on this; see r/AIethics and https://www.reddit.com/r/AIethics/comments/4y2pof/machine_ethics_reading_list/.
reddit | AI Moral Status | 2017-02-15 | ♥ 8
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | mixed |
| Policy | unclear |
| Emotion | indifference |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
```json
[
  {"id": "rdc_dds0ck6", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "rdc_dds3a6a", "responsibility": "none", "reasoning": "deontological", "policy": "unclear", "emotion": "indifference"},
  {"id": "rdc_dds2y55", "responsibility": "developer", "reasoning": "mixed", "policy": "unclear", "emotion": "mixed"},
  {"id": "rdc_dds4pao", "responsibility": "none", "reasoning": "mixed", "policy": "unclear", "emotion": "indifference"},
  {"id": "rdc_dds5e0b", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"}
]
```
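The raw response is a JSON array with one object per coded comment, keyed by `id`. A minimal sketch of looking up one comment's coded dimensions (the field names come from the response above; the shortened two-entry `raw` string here is illustrative):

```python
import json

# Raw model output: a JSON array, one object per coded comment
# (abbreviated to two entries for illustration).
raw = (
    '[{"id":"rdc_dds0ck6","responsibility":"none","reasoning":"unclear",'
    '"policy":"unclear","emotion":"indifference"},'
    '{"id":"rdc_dds4pao","responsibility":"none","reasoning":"mixed",'
    '"policy":"unclear","emotion":"indifference"}]'
)

# Index the coded rows by comment ID for direct lookup.
codes = {row["id"]: row for row in json.loads(raw)}

row = codes["rdc_dds4pao"]
print(row["reasoning"], row["emotion"])  # mixed indifference
```

Indexing by `id` is what makes a "look up by comment ID" view cheap: one parse of the response, then O(1) dictionary lookups per inspected comment.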