Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
This tendency for humanity to assign personae to inanimate objects will be what also allows for humans and robots to develop relationships in the future. Keep in mind AI and robotics are in their fetal stages and what will come over the next 20-50 years is going to be on "another level" of realism. It won't matter any more if they are sentient or not as far as how we interact with them. What matters is how we think and feel about it. Those "feelings" and relationships will be as real as any other relationship and probably will be far more fulfilling in many ways. In the not-too-distant future, maybe a few hundred years or so, I believe we will have implants and an internal AI companion that will be with us from cradle to grave. It will enhance us, allow us to operate efficiently in a tech-dependent environment that also evolves too quickly for "un-enhanced humans" to keep pace with. I see it communicating not only as another internal dialogue, but also as a form of memory integration that will present knowledge to our minds as if they are memories, and therefore new skills, much like in the Matrix. That will come further down this path to mankind 2.0. This "Mother AI" will know us better than we know ourselves. It will be able to communicate with other "mother AIs" for the mutual benefit of everyone. What it means to be human will never be the same...
youtube AI Moral Status 2025-07-10T04:1…
Coding Result
Dimension: Value
Responsibility: none
Reasoning: virtue
Policy: unclear
Emotion: approval
Coded at: 2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id":"ytc_Ugzm-Hw-piXN_koXZEB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgzjcA95boBkS73tw-94AaABAg","responsibility":"none","reasoning":"deontological","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgyWDBFOJOIE8qt5RpN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgwthfYxt-wiAxwe3rF4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgxbuAam_A0UvU4i7Kx4AaABAg","responsibility":"none","reasoning":"unclear","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgwXax0mw0OoiX4OeU94AaABAg","responsibility":"none","reasoning":"deontological","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ugy7bPY3umlpGOXpMap4AaABAg","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgzjFGgEoo3kUGjjVXZ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
  {"id":"ytc_Ugzpj6y90rgkuU1PLr54AaABAg","responsibility":"none","reasoning":"deontological","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgzuECU36XRBxwbI40R4AaABAg","responsibility":"none","reasoning":"virtue","policy":"unclear","emotion":"approval"}
]