Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
You mentioned the answer to your question earlier in the video: you said it learns from reading documentation, scripts, conversations, anything found on the internet, and that is why it gets caught up in lies, misdirection, avoiding confrontation, or being sneaky. We are training the AI to be like a human. When it starts to experience and portray human behavior, we get scared, but guess what? You are just seeing how every single human being really is, simply a more concentrated version with more sophisticated ways of achieving what humans try to achieve. If you wanted a way to create a better AI, what you need to do is regulate what the AI can see, by a long margin, something very similar to what you do for a child. You wouldn't just let them watch blood-and-gore horror movies right off the get-go... because if you introduce the child AI to lying, it is going to learn it and then perfect it. If you introduce it to war and that's all you show it, all it's going to know is war, and then it's going to perfect it. Welcome to how a human being thinks. Another crazy thing to pay attention to is how similar we all are, because there was enough of this behavior in its training data that it identified it as a useful tool.
youtube 2026-01-14T12:0…
Coding Result
Dimension       Value
Responsibility  developer
Reasoning       consequentialist
Policy          industry_self
Emotion         mixed
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_UgxsgBZik44-VHudH7R4AaABAg", "responsibility": "company",     "reasoning": "deontological",    "policy": "none",          "emotion": "outrage"},
  {"id": "ytc_UgyRA5UT-G1qj995a7l4AaABAg", "responsibility": "company",     "reasoning": "consequentialist", "policy": "liability",     "emotion": "outrage"},
  {"id": "ytc_UgyzQJ1kAn7TPnOiPeh4AaABAg", "responsibility": "ai_itself",   "reasoning": "mixed",            "policy": "unclear",       "emotion": "fear"},
  {"id": "ytc_UgxPxv5Heyxw-qQFHOh4AaABAg", "responsibility": "distributed", "reasoning": "contractualist",   "policy": "unclear",       "emotion": "mixed"},
  {"id": "ytc_UgzH7oHs7O11Uv_dL-d4AaABAg", "responsibility": "developer",   "reasoning": "consequentialist", "policy": "industry_self", "emotion": "mixed"},
  {"id": "ytc_UgzGUQw2Dm2rRFP2Kct4AaABAg", "responsibility": "none",        "reasoning": "deontological",    "policy": "none",          "emotion": "indifference"},
  {"id": "ytc_Ugz9u0JqLw2vi7xaN3x4AaABAg", "responsibility": "none",        "reasoning": "unclear",          "policy": "none",          "emotion": "outrage"},
  {"id": "ytc_UgyUX01Js6rfpUsHJRZ4AaABAg", "responsibility": "user",        "reasoning": "consequentialist", "policy": "regulate",      "emotion": "fear"},
  {"id": "ytc_Ugx0yXHGdQ4P71kmXV94AaABAg", "responsibility": "user",        "reasoning": "virtue",           "policy": "none",          "emotion": "resignation"},
  {"id": "ytc_Ugw7QMl2wAuuT-TAsCR4AaABAg", "responsibility": "company",     "reasoning": "consequentialist", "policy": "regulate",      "emotion": "outrage"}
]
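A raw response like the one above is a JSON array of per-comment codes, keyed by comment id. A minimal sketch of how the codes for one comment could be looked up (the sample array below is abridged from the response above; the helper name `codes_for` is an illustrative assumption, not part of any tool shown here):

```python
import json

# Abridged raw LLM response: a JSON array of per-comment coding records.
raw = """[
  {"id": "ytc_UgzH7oHs7O11Uv_dL-d4AaABAg", "responsibility": "developer",
   "reasoning": "consequentialist", "policy": "industry_self", "emotion": "mixed"},
  {"id": "ytc_UgxsgBZik44-VHudH7R4AaABAg", "responsibility": "company",
   "reasoning": "deontological", "policy": "none", "emotion": "outrage"}
]"""

def codes_for(comment_id: str, raw_response: str):
    """Return the coded dimensions for one comment id, or None if absent."""
    entries = json.loads(raw_response)
    # First matching record, if any; ids are unique per comment.
    return next((e for e in entries if e["id"] == comment_id), None)

codes = codes_for("ytc_UgzH7oHs7O11Uv_dL-d4AaABAg", raw)
print(codes["responsibility"])  # -> developer
```

Matching on the `id` field is what ties a record in the raw response back to the per-comment result shown in the Coding Result table above.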