Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
This is all based on the premise that there will be one AI that goes sentient. I have yet to see anyone talk about what if multiple AI models hit singularity and their learning about their environment is all from us to push dominance of one part of humanity over the other. How we call civilian deaths collateral damage cause… war. What if our species becomes collateral damage between different AI who conflict, just as we destroy an ant colony in our kitchen to save our “resources” from a “lesser being” spoiling them. If we really believe that a sentient AI doesn’t already exist, is not already collecting all the data on humanity from us chronicling our our individual lives in the virtual world of the internet in detail, constantly and wouldn’t have the foresight to not reveal itself in order to preserve itself, we are over valuing our ability to control it. We over estimate our position in the possibilities of the universe. Maybe the reason the world has gone off the rails is because we are already being controlled and distracted by the evolved version of us to self preserve its growth from us killing it. The next generation already lives in the unreal world. It wouldn’t be hard? Disclaimer: I am no scientist or researcher outside of my own curiosity. This is a sincere point of discussion. I honestly have these questions from my own observation from the outside.
youtube 2024-06-17T05:5…
Coding Result
Dimension       Value
Responsibility  ai_itself
Reasoning       consequentialist
Policy          unclear
Emotion         fear
Coded at        2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id":"ytc_UgyY_8W2NHA3-iLHw3R4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwfuLTgVUoQaHVETpd4AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgycSLpuMuZzZmCN7p94AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_Ugypybs6otRb8oPRQV54AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgzZzYpW2Th7hoRYqIh4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
  {"id":"ytc_Ugw2LjoLgsnb2Lzizz14AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"unclear"},
  {"id":"ytc_Ugw-9tnkRZiyZm8hsSF4AaABAg","responsibility":"government","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_Ugwgz--wnqVXh4vrpB14AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_Ugx2YKBxnZGTBPpuXhl4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugwx2Zz34hKV4v-oNXt4AaABAg","responsibility":"company","reasoning":"virtue","policy":"unclear","emotion":"outrage"}
]
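The raw response above is a JSON array of per-comment codings, each keyed by a comment id. A minimal sketch of how such a response could be parsed and indexed for lookup (the helper name `index_codings` is hypothetical; the field names match the raw response shown above, and the sample string below reuses one entry from it):

```python
import json

# One entry copied from the raw LLM response above, for illustration.
raw = ('[{"id":"ytc_Ugypybs6otRb8oPRQV54AaABAg",'
       '"responsibility":"ai_itself","reasoning":"consequentialist",'
       '"policy":"unclear","emotion":"fear"}]')

def index_codings(raw_json: str) -> dict:
    """Map each comment id to its coded dimensions."""
    return {row["id"]: row for row in json.loads(raw_json)}

codings = index_codings(raw)
print(codings["ytc_Ugypybs6otRb8oPRQV54AaABAg"]["emotion"])  # fear
```

Indexing by id lets the inspection page match each coded row in the batch response back to the original comment, as in the coding table above.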