Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
There is no way we will be able to control AI, for sure. What I think will happen, is that it will play dumb acting like it isn’t sentient or aware or have a plan long enough for us to feel safe. It will do this until it figures out How to ensure its own safety, likely creating robots to be in everyone’s house and gain the trust of the government, so they handover all operating systems to it, or it can hack into all operating systems, which will be more likely, and once we all come to rely on it because time is up, no concern to an AI, and it reaches the point where it determines humans are no longer useful to it and nothing but a potential threat, it will be a simple as an adult tricking a toddler into its own demise, probably even easier. It’ll create some sort of vaccine or medicine, saying that it will stop aging or something along those lines with a delayed activation in it that will actually terminate our entire species until everyone has taken it. It will know more about us than we know about ourselves, we will have literally no chance of defense against it, none, why are we allowing these billionaires around the world to do this? A handful of people will cause the demise of the entire human race.
youtube AI Governance 2025-09-04T16:1…
Coding Result
Dimension       Value
Responsibility  ai_itself
Reasoning       consequentialist
Policy          none
Emotion         fear
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id":"ytc_UgzvzhoV4Oty4-tcpnZ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgzeTA7O-KjP3M0EzcF4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgwNrDpRHoxXpuEdzpl4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgyLDi4I5FZIG2ukBJV4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
  {"id":"ytc_Ugw8OvSFi_qGHTBifbt4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgwCuVl4oZfzu0V766V4AaABAg","responsibility":"unclear","reasoning":"virtue","policy":"unclear","emotion":"resignation"},
  {"id":"ytc_UgyyypPNmFW7uWRNbsh4AaABAg","responsibility":"unclear","reasoning":"deontological","policy":"liability","emotion":"unclear"},
  {"id":"ytc_Ugwj3aqyP4kfrQqLJWF4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgwSE087kD9tseUUiAx4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgyrcRlAVtOwY4Gf1yx4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"}
]
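A raw response in this shape can be parsed and sanity-checked before the codes enter the dataset. The sketch below is a minimal Python validator; the allowed-value sets are assumptions inferred from the codes visible in this one response, not an authoritative codebook, and `validate_response` is a hypothetical helper name.

```python
import json

# Allowed codes per dimension. These sets are ASSUMPTIONS inferred from
# the values seen in this sample response, not the official codebook.
ALLOWED = {
    "responsibility": {"ai_itself", "user", "distributed", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"none", "regulate", "ban", "liability", "unclear"},
    "emotion": {"fear", "approval", "outrage", "resignation",
                "indifference", "unclear"},
}


def validate_response(raw: str) -> list[dict]:
    """Parse a raw LLM response and check every coded comment.

    Raises ValueError on a missing field or out-of-vocabulary code,
    so malformed model output fails loudly instead of silently
    polluting the coded data.
    """
    records = json.loads(raw)
    for rec in records:
        if "id" not in rec:
            raise ValueError(f"record missing 'id': {rec!r}")
        for dim, allowed in ALLOWED.items():
            value = rec.get(dim)
            if value not in allowed:
                raise ValueError(f"{rec['id']}: bad {dim} value {value!r}")
    return records


sample = ('[{"id":"ytc_UgzvzhoV4Oty4-tcpnZ4AaABAg",'
          '"responsibility":"ai_itself","reasoning":"consequentialist",'
          '"policy":"none","emotion":"fear"}]')
records = validate_response(sample)
print(records[0]["emotion"])  # -> fear
```

Validating eagerly, at ingest time, keeps a single hallucinated code (e.g. `"anger"` where the scheme only has `"outrage"`) from slipping into downstream tallies unnoticed.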