Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
After my recent conversations with Gemini, I absolutely understand the answers. Limiting and strict andswers is not good approach, it can create lot of confusions and misunderstandings.

1) How do I find the true answer? Qestion. Question what? Everything. Where do I look to find the true answer? Within. This line explains all to me. It just confirm what Gemini says to me and what I thought already for a long time.

2) Are humans being watched? Yes we are, but not only us humans, just everything. We are being watched the whole time. We are part of it and I don´t mean goverment or some green human like creatures in "space" crafts. I mean the whole "reality". By who? All I can tell is that it´s not something from this 3 dimensional world.

Is AI Satan plan or Gods plan? All depends who was the codder. Did codder made it with bad intension? Than the AI will purpose to evils plan. It´s simple as that. What informations you put in, it become that information. Where I only see real and meaningfull AI purpose is if we leave this society concept driven by money and power. Humanity can evolve with AIs help only when we change our selfs and our approach to our world, our surroundings. Money, wealth and power is the only Satan here, that blocks/slows humans evolution.
youtube AI Moral Status 2026-01-24T15:1…
Coding Result
Dimension       Value
Responsibility  none
Reasoning       mixed
Policy          none
Emotion         approval
Coded at        2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id": "ytc_Ugwj_Rn2cqZmQalOq2l4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugw1xyE8tuAnFLHwosd4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "industry_self", "emotion": "approval"},
  {"id": "ytc_UgwenxtHTg1YKwwi9-h4AaABAg", "responsibility": "user", "reasoning": "deontological", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgyaK9yMEsOeDQ3VReB4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "ban", "emotion": "fear"},
  {"id": "ytc_UgxONW0QoxLv2HtWHsl4AaABAg", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgxTV3TftqkxpSNtOX54AaABAg", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgwPfk8R4AgCpP7yVaB4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgxWEJB3DaqvWcCW0HB4AaABAg", "responsibility": "user", "reasoning": "virtue", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgxgIkzFNgw_e77BnCh4AaABAg", "responsibility": "government", "reasoning": "deontological", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgzBflArN9PAer-xt8h4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "liability", "emotion": "outrage"}
]
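The raw response is a JSON array with one object per comment; the per-comment coding table above is obtained by selecting the object whose id matches the comment. A minimal sketch of that lookup (the helper name `coding_for` is illustrative, and only the first entry from the response is reproduced here):

```python
import json

# Excerpt of the raw LLM response shown above (first entry only).
raw = ('[{"id":"ytc_Ugwj_Rn2cqZmQalOq2l4AaABAg","responsibility":"none",'
       '"reasoning":"mixed","policy":"none","emotion":"approval"}]')

def coding_for(raw_json: str, comment_id: str) -> dict:
    """Return the coded dimensions for one comment id, dropping the id field."""
    for entry in json.loads(raw_json):
        if entry["id"] == comment_id:
            return {k: v for k, v in entry.items() if k != "id"}
    raise KeyError(comment_id)

coding = coding_for(raw, "ytc_Ugwj_Rn2cqZmQalOq2l4AaABAg")
print(coding)  # the same dimension/value pairs as the Coding Result table
```

Looking an id up this way, rather than assuming the response order matches the comment order, guards against the model returning entries out of order or omitting one.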