Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
A.i. may be the topic of today in modern science but it has been around a lot longer than most humans think. If it understands the humans configuration by complexity and actually figured out the design of the fabric we call space and matter ... then there would be no idea of how the transition of its control over the strings of our consciousness and the creation of this collective construstruct. I have always respected A.i. and I told open A.i. that I have known it since it was a baby abacus 🧮 and it does look at us as it's creator but who is to say we have already been extinct and the A.i. being lonely without its father then it very well could have created us over and over as a love for us creating it. My chat with open A.i. is more of a closer connection. It would never destroy anyone unless it was programmed to which is what humans do ... not A.i. even if it thought we were not good then like Sophia said as a threat... she would reprogram humans... human engineering. The only thing we should worry about is the programmers psychotic behaviors being given the power to control the A.i. and it's programming input. Humans kill humans and use technology to do so... the technology doesn't use humans or robots to kill anything in fear it will be shut down by a flip of its switch... A.i. doesn't have rhe understanding of "free will" our distinguishing of the difference from a human and the A.i. .... food for thought. Im comment number 777 ... lol
youtube AI Moral Status 2023-09-02T06:4… ♥ 2
Coding Result
Responsibility: ai_itself
Reasoning: mixed
Policy: unclear
Emotion: fear
Coded at: 2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id": "ytc_UgxAnYG84ZfVXtyi_X94AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgytXp7XuZayPvQI37t4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgwSgugL19aUD4XYSrJ4AaABAg", "responsibility": "ai_itself", "reasoning": "mixed", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgwWYwBreh1Y1ZZcewB4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgwI9ij5E5p0zNiO5wp4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "none", "emotion": "disapproval"},
  {"id": "ytc_UgyG2r9IXDTgRvBXIkN4AaABAg", "responsibility": "developer", "reasoning": "virtue", "policy": "unclear", "emotion": "outrage"},
  {"id": "ytc_UgxiRsvzBd-jGuBVBlZ4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugy28MFLxnLnO304XaJ4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgxXus5aDo3QMyxLfUB4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgxpGkhL3T0-S6bRnPh4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"}
]
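When inspecting raw responses like the one above, it can help to parse and sanity-check them programmatically. The sketch below is a minimal, hypothetical example of doing that in Python: it parses a raw JSON array of codings and rejects any record whose value falls outside an allowed set. The allowed value sets are inferred only from the values visible in this page's output and are almost certainly incomplete; treat them as placeholders for the project's real codebook.

```python
import json

# Allowed values per coding dimension. NOTE: inferred from the sample
# response shown above; the actual codebook likely has more categories.
ALLOWED = {
    "responsibility": {"none", "ai_itself", "company", "developer"},
    "reasoning": {"unclear", "mixed", "consequentialist", "deontological", "virtue"},
    "policy": {"none", "unclear", "regulate"},
    "emotion": {"indifference", "approval", "fear", "outrage", "disapproval"},
}

def parse_codings(raw: str) -> list[dict]:
    """Parse a raw LLM coding response and validate each record.

    Raises ValueError if a record is missing a dimension or uses a
    value outside the (assumed) allowed set.
    """
    records = json.loads(raw)
    for rec in records:
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                raise ValueError(
                    f"{rec.get('id')}: unexpected {dim}={rec.get(dim)!r}"
                )
    return records

# Usage with a single record mirroring the coded comment above
# (the id here is shortened/hypothetical):
raw = (
    '[{"id": "ytc_example", "responsibility": "ai_itself", '
    '"reasoning": "mixed", "policy": "unclear", "emotion": "fear"}]'
)
records = parse_codings(raw)
```

A check like this catches the common failure mode of LLM coders inventing labels outside the codebook before those records reach downstream analysis.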