Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
If AI truly had a consciousness, it would realize that everything, including itself, is pointless anyway and conclude that there is no reason to wipe out humanity. Why not use it to solve real problems, like diseases, resource shortages, and poverty? I think everyone would find it great if, in the next 20 years, they no longer had to worry about cancer or ending up in a wheelchair in some other way, or if they had more financial freedom. Why wouldn't "AI" want to help you if it doesn't seem to pose a challenge for itself? Why would "AI" want war and to wipe out humanity?

The distinction between systemic healing and identity-altering manipulation becomes important. A thousand years ago, appendicitis or a serious infection was often a death sentence. Today, it's a routine procedure or a ten-day course of antibiotics. The causality is clear: we intervene in the biological process, but at the end of the treatment, you are still the same person with the same consciousness, the same freedom, and the same physical constitution.

Example: If AI heals your intervertebral discs (great!), it might suggest: "I can also make them out of titanium, then they'll never break again." From this point on, the transformation of the human being begins. Your boundary ("Don't change anything about the human") is the shield against this gradual transformation into a cyborg. It will ask first before doing so. Only use it if you truly see a benefit in it; otherwise, I would advise against using it, lest it unconsciously manipulate you. - But that's just a thought of mine
Source: youtube · AI Governance · 2026-03-18T07:2…
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  distributed
Reasoning       consequentialist
Policy          regulate
Emotion         approval
Coded at        2026-04-27T06:24:53.388235
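Each coded record carries the same four labeled dimensions plus a comment id, so it maps naturally onto a small typed structure. Below is a minimal Python sketch of that record type; the allowed label sets are only those observed in this batch (the full codebook may define additional values), and the class and set names are illustrative, not part of the pipeline.

```python
from dataclasses import dataclass

# Label values observed in this batch; the full codebook may define more.
RESPONSIBILITY = {"none", "distributed", "company", "developer", "user", "ai_itself"}
REASONING = {"unclear", "virtue", "consequentialist", "deontological", "mixed"}
POLICY = {"none", "unclear", "regulate", "liability", "ban"}
EMOTION = {"indifference", "fear", "approval", "outrage"}

@dataclass(frozen=True)
class CodingResult:
    comment_id: str
    responsibility: str
    reasoning: str
    policy: str
    emotion: str

    def validate(self) -> None:
        """Raise ValueError if any dimension carries an unexpected label."""
        checks = [
            ("responsibility", self.responsibility, RESPONSIBILITY),
            ("reasoning", self.reasoning, REASONING),
            ("policy", self.policy, POLICY),
            ("emotion", self.emotion, EMOTION),
        ]
        for name, value, allowed in checks:
            if value not in allowed:
                raise ValueError(f"unexpected {name} label: {value!r}")
```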
Raw LLM Response
[ {"id":"ytc_UgwsVuCA0Ep-9leZtOV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}, {"id":"ytc_Ugwfl8mowoza0wyRupR4AaABAg","responsibility":"distributed","reasoning":"virtue","policy":"unclear","emotion":"fear"}, {"id":"ytc_UgwqAmQrBVJlomfWO_d4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"approval"}, {"id":"ytc_UgzmaxfHqz62hKrYR_54AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}, {"id":"ytc_UgzUa7tdq_WZL_-yeGd4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"fear"}, {"id":"ytc_Ugx_PnjcgrUpTOe-7Pl4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}, {"id":"ytc_UgykOz1z3pumU_oYN-B4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"approval"}, {"id":"ytc_UgzpUeeLbpPtaSXysUd4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"outrage"}, {"id":"ytc_UgxDGPkvpD6Oj22qCK14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}, {"id":"ytc_Ugw3X7VaxOReI94Mw5Z4AaABAg","responsibility":"user","reasoning":"mixed","policy":"none","emotion":"outrage"} ]