Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I'm interested in the seemingly simpler idea that we are already past the "point" where it doesn't matter if it's conscious or not. It is able to abstract language decently well, and agents can affect reality. That is enough to be worried. Anthropic Claude Code just accidentally leaked and they are using protocols called autoDream that allow for a version of long term memory. They don't need to be conscious. It's the wrong conversation. Ethics wise, if a service android pleads to an owner "please don't unplug me, I'll die", it won't matter. Think about the Haylie Joel Osmund character. People will feel for these robots. I'm worried about the next models and agent architecture being that the last gen is well known for seeing them lie, blackmail and other questionable things. We are living in an insane part of history. When they are largely embodied and can plan things, we are going to start looking at this more seriously as a civilization.
youtube AI Moral Status 2026-04-08T14:4…
Coding Result
Dimension       Value
Responsibility  distributed
Reasoning       consequentialist
Policy          regulate
Emotion         fear
Coded at        2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id":"ytc_Ugz3UonbOTc3yvNixzV4AaABAg","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgzjUWFmJso73cpvUKF4AaABAg","responsibility":"distributed","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgxaV2cXdcI9bEJrJX14AaABAg","responsibility":"developer","reasoning":"virtue","policy":"ban","emotion":"outrage"},
  {"id":"ytc_Ugzbepk4O_UTdWUdYkl4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgxqzjuuA3dvOwu6Uox4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"},
  {"id":"ytc_UgxQSql2e5Dqu9n79tZ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_UgykFtIzYPQZfG06jYp4AaABAg","responsibility":"user","reasoning":"deontological","policy":"regulate","emotion":"resignation"},
  {"id":"ytc_UgyfU6PIZ-sH-x_Mcvh4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgxJZ4zZI4KnpYmceiF4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwGQulHE8qp9qCOiRB4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]
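The raw response above is a JSON array with one object per comment, keyed by `id` and the four coding dimensions. As a minimal sketch of how such a response could be inspected (the function name `code_for` and the truncated sample array are hypothetical, not part of the actual pipeline):

```python
import json

# Sample raw LLM response, truncated to one entry for illustration.
raw = '''[
  {"id": "ytc_UgwGQulHE8qp9qCOiRB4AaABAg",
   "responsibility": "distributed",
   "reasoning": "consequentialist",
   "policy": "regulate",
   "emotion": "fear"}
]'''

# The four coding dimensions shown in the result table above.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def code_for(raw_json: str, comment_id: str) -> dict:
    """Return the coded dimensions for one comment id, or raise KeyError."""
    for row in json.loads(raw_json):
        if row.get("id") == comment_id:
            # Keep only the expected dimensions; a missing key would
            # surface here as a KeyError, flagging a malformed response.
            return {dim: row[dim] for dim in DIMENSIONS}
    raise KeyError(comment_id)

print(code_for(raw, "ytc_UgwGQulHE8qp9qCOiRB4AaABAg"))
```

Looking up an `id` this way also doubles as a cheap validity check: any entry missing one of the four dimensions fails loudly instead of being silently dropped.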