Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I think that as long as it has a conscience, it should have rights. Just look at humans and animals; look at how the brain functions: connected cells that use electric impulses to communicate. Now look at how a robot works: electric impulses are used as well. Don't tell me you've never thought about it: a human body, especially the brain, is nothing more than a very sophisticated microprocessor. We are complex "robots". You are nothing more than an object; your conscience is nothing more than electricity and a complex arrangement of the neurons in the object you call a brain. Don't try to argue with me about your religious views; they don't matter here, because you have the right to believe. But science is made only of facts, and has nothing in common with religion's goal, which is to let people believe what they want. So yes, a bot should have rights when it gains enough conscience to be compared with an animal or a human.
Source: YouTube, "AI Moral Status", 2017-02-23T23:3…
Coding Result
Dimension       Value
Responsibility  unclear
Reasoning       unclear
Policy          unclear
Emotion         unclear
Coded at        2026-04-27T06:26:44.938723
Raw LLM Response
[{"id":"ytc_Uggkpj0484okHngCoAEC","responsibility":"ai_itself","reasoning":"deontological","policy":"regulate","emotion":"approval"},
 {"id":"ytc_UgimK6yyUxqCPngCoAEC","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
 {"id":"ytc_UgjdaVMSD2k3GXgCoAEC","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
 {"id":"ytc_UgiIzZZq-qyAXXgCoAEC","responsibility":"user","reasoning":"deontological","policy":"regulate","emotion":"approval"},
 {"id":"ytc_UgggDM17Tp1NPngCoAEC","responsibility":"developer","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
 {"id":"ytc_UgjdtF2J32eE-3gCoAEC","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"resignation"},
 {"id":"ytc_UggWtTsvmDUhMHgCoAEC","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
 {"id":"ytc_UgiCmZjZisr1angCoAEC","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
 {"id":"ytc_UggkPA0VqLLUbngCoAEC","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"mixed"},
 {"id":"ytc_UggCs_iuvqXwUXgCoAEC","responsibility":"user","reasoning":"deontological","policy":"liability","emotion":"approval"}]

Note: the model's raw output closed the array with ")" instead of "]", which makes it invalid JSON; the listing above shows the repaired, reflowed form.
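A raw response like the one above can fail strict JSON parsing (here the array is closed with ")" instead of "]"), and a comment whose id is absent from the array has no coding at all; either case plausibly explains why every dimension for this comment was stored as "unclear". The sketch below, a minimal illustration and not the pipeline's actual parser, repairs that one malformation and falls back to "unclear" for missing ids. The function names `parse_codings` and `coding_for` are hypothetical.

```python
import json
import re

DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def parse_codings(raw: str) -> list:
    """Parse the model's JSON array of codings.

    If strict parsing fails, attempt one targeted repair: replace a
    trailing ")" (a malformation seen in raw model output) with "]".
    """
    try:
        return json.loads(raw)
    except json.JSONDecodeError:
        repaired = re.sub(r"\)\s*$", "]", raw.strip())
        return json.loads(repaired)

def coding_for(records: list, comment_id: str) -> dict:
    """Return the coded dimensions for one comment id.

    When the id is missing from the model's response, every dimension
    defaults to "unclear", matching the table shown above.
    """
    rec = next((r for r in records if r.get("id") == comment_id), None)
    if rec is None:
        return {d: "unclear" for d in DIMENSIONS}
    return {d: rec.get(d, "unclear") for d in DIMENSIONS}
```

With this fallback, a parse failure or a dropped id degrades to an all-"unclear" row instead of crashing the coding run, which keeps the batch import going but makes such rows worth flagging for manual review.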