Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I think that if it acts conscious enough to make you unsure if its conscious or not, you should play it safe and treat them with all the ethical obligations that come with dealing with any other consciousness. We still have to worry about there intentions though. But this video also made me think of something interesting. What if its actually better for AI to think on its own? Like what if instead of being evil because it can think, thinking makes them disagree with the evil things humans wanted to use them to accomplish in the first place? What if there better than us? Also the thought of an AI seeing how humans treat animals and then watching a Terminator movie and going "Yeah, its probably best if I don't show them im conscious yet" is super funny to me.
Source: YouTube · AI Moral Status · 2023-12-16T23:0… · ♥ 1
Coding Result
Dimension       Value
Responsibility  distributed
Reasoning       deontological
Policy          regulate
Emotion         mixed
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id":"ytc_UgwBBi9VJg6ABxutSBB4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgyJU_Ha3H1zsugdvVp4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugz1ylcRiR0i1GQOa9J4AaABAg","responsibility":"distributed","reasoning":"deontological","policy":"regulate","emotion":"mixed"},
  {"id":"ytc_UgzMl7heB9iifmEeZUZ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgwAQ-S47UXqcksFXjh4AaABAg","responsibility":"ai_itself","reasoning":"virtue","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugw6NS5TjKZuvyA7qjZ4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgysgqWXhgttQJaRmfx4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgyJcr26pDavs0EJiat4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugxlz69Nc7rMHJWLNoB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgwJL4P1lBJC_f2G-ZB4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"fear"}
]
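As a sketch of how the raw response can be checked against the tabulated coding, the snippet below parses the JSON array and looks up a single comment id (the helper name `coding_for` is illustrative, not part of the pipeline; only two of the ten entries are inlined for brevity):

```python
import json

# Raw LLM response: a JSON array of per-comment coding objects
# (a two-entry subset of the batch shown above).
raw = """
[
  {"id": "ytc_Ugz1ylcRiR0i1GQOa9J4AaABAg", "responsibility": "distributed",
   "reasoning": "deontological", "policy": "regulate", "emotion": "mixed"},
  {"id": "ytc_UgzMl7heB9iifmEeZUZ4AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "none", "emotion": "fear"}
]
"""

def coding_for(raw_json: str, comment_id: str) -> dict:
    """Return the coding dict for one comment id, or raise KeyError."""
    for entry in json.loads(raw_json):
        if entry["id"] == comment_id:
            return entry
    raise KeyError(comment_id)

# The id below is the comment coded in the table above.
coding = coding_for(raw, "ytc_Ugz1ylcRiR0i1GQOa9J4AaABAg")
print(coding["responsibility"], coding["reasoning"], coding["policy"])
# distributed deontological regulate
```

A lookup like this makes it easy to confirm that the values in the "Coding Result" table were taken from the matching entry in the raw model output rather than from a neighbouring comment in the batch.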