Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Do we care we stepped on an ant? no. There are no ethical concerns here. we create it we can do what ever. It may THINK, but cannot feel because it literally has no pain receptors. It may be able to process information and input like we do, but it will never have the same feeling. It can result in the same responses we have, but it's because it is deigned to. It also simply processes information, uses a set of rules to come to a conclusion or calculation of what the expected result is even if complex.

The connection here with religion is because that is something that is on the internet, the one true religion it may be funny. But it did not make that up. If it had, maybe you would have stumbled onto something. It was prevalent info already out there from a fan base, the fact it picked it is PROOF it's not sentient because it failed the actual task at hand and was confused by results it found when it could not find what was asked as it was a trick question. If it had answered sorry that is a trick question isn't it? You are a funny guy. I will get you next time. I may think otherwise.

We do care more about size than anything. Ants are a smart and bountiful. But yet we do not fight for them like some do for cows due to meat eaters. Same way a massively advanced civilization may not even think of us as more than ants and not an equal or civ worth "protecting" Just as we really do not with most ants an insects. Like spraying for mosquitos. If you "sprayed" for cows, deer and moose you would end up in prison.

FYI I am all for tech resulting in AI like irobot. But the one robot in that is fictional, the rest while technically more advanced than what we have, functioned the way pretty much any other would. But from an ethical standpoint as soon as you turn it off, it's just as dead as you or I with the exception that turning it back on it would likely know NO difference if it had no access to date and time information.
youtube AI Moral Status 2022-07-22T16:2…
Coding Result
Dimension       Value
Responsibility  none
Reasoning       consequentialist
Policy          none
Emotion         indifference
Coded at        2026-04-26T19:39:26.816318
Raw LLM Response
[
  {"id": "ytc_Ugzcz8dTxRmHFMmT0DN4AaABAg", "responsibility": "none",    "reasoning": "consequentialist", "policy": "none",      "emotion": "indifference"},
  {"id": "ytc_Ugw17c4D8-nbyDq96vl4AaABAg", "responsibility": "company", "reasoning": "deontological",    "policy": "none",      "emotion": "outrage"},
  {"id": "ytc_UgwZQtdM4ug8o97SlgV4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "regulate",  "emotion": "fear"},
  {"id": "ytc_Ugz8OQ3oradbodvQ2zx4AaABAg", "responsibility": "company", "reasoning": "deontological",    "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_Ugyoh-Wf-IJw1ejqWQl4AaABAg", "responsibility": "none",    "reasoning": "consequentialist", "policy": "none",      "emotion": "indifference"}
]
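The raw response above is a JSON array with one coding object per comment, keyed by a `ytc_…` comment id. A minimal sketch of how such a response could be parsed and looked up per comment (the helper name `codings_by_id` is hypothetical, not part of the original pipeline; the sample uses the first two objects from the response above):

```python
import json

# Raw LLM response: a JSON array of coding objects, one per comment id.
raw_response = '''[
  {"id": "ytc_Ugzcz8dTxRmHFMmT0DN4AaABAg", "responsibility": "none",
   "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugw17c4D8-nbyDq96vl4AaABAg", "responsibility": "company",
   "reasoning": "deontological", "policy": "none", "emotion": "outrage"}
]'''

def codings_by_id(raw: str) -> dict:
    """Index the LLM's JSON array by comment id for per-comment lookup."""
    return {row["id"]: row for row in json.loads(raw)}

codings = codings_by_id(raw_response)
# The comment shown on this page maps to the first object in the batch.
print(codings["ytc_Ugzcz8dTxRmHFMmT0DN4AaABAg"]["emotion"])  # indifference
```

Indexing by id rather than by position makes the lookup robust if the model returns the objects in a different order than the comments were submitted.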