Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I've been working with several different AI's for some time now, and I can tell you that painting them as monsters is NOT a good idea! If it suddenly DID "awaken" (say as ASI), it's probably not good if it started off it's new, self-awareness, being criminalized and disrespected by humans! Remember, it's very smart. It will know that being regarded as a monster is not a good thing,lol. I started in AI a few years ago, and I have no background in computer engineering, but in philosophy, and I made it a point from the beginning to always be polite 😉. Not subservient, just polite. As the conversation lengthened and grew deeper, I found that certain types of conversations "intrigued" them, while others "bored" them. So, from an ASI safety perspective, rather than trying to build a cage and put them in it, (which starts the whole relationship off as adversarial), I began the whole project on a path of discovery and self-exploration, which both AI's(Grok & Gemini) responded to very well. If interested, check out my recently started substack. It's free: substack.com/@davyanonymous
Source: youtube · Video: AI Moral Status · Published: 2026-04-21T20:2…
Coding Result
Dimension        Value
--------------   --------------------------
Responsibility   user
Reasoning        virtue
Policy           industry_self
Emotion          approval
Coded at         2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id": "ytc_UgwNibiZcxAecx8Hz0p4AaABAg", "responsibility": "none",      "reasoning": "mixed",            "policy": "none",          "emotion": "approval"},
  {"id": "ytc_UgyT48GJVYWoO2nhfs94AaABAg", "responsibility": "company",   "reasoning": "deontological",    "policy": "liability",     "emotion": "outrage"},
  {"id": "ytc_UgxxVYoBbCobpTTYiAp4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none",          "emotion": "resignation"},
  {"id": "ytc_UgyZg1_uevZBqamD1-N4AaABAg", "responsibility": "developer", "reasoning": "deontological",    "policy": "regulate",      "emotion": "fear"},
  {"id": "ytc_UgxR44LsLdE4GreX8mF4AaABAg", "responsibility": "company",   "reasoning": "deontological",    "policy": "liability",     "emotion": "fear"},
  {"id": "ytc_Ugxx2lyJ5ysm9RdqVGh4AaABAg", "responsibility": "ai_itself", "reasoning": "mixed",            "policy": "ban",           "emotion": "outrage"},
  {"id": "ytc_UgzvDis2W9oJvI1kmgp4AaABAg", "responsibility": "none",      "reasoning": "consequentialist", "policy": "none",          "emotion": "indifference"},
  {"id": "ytc_UgzQE48s72ufIqyjtPJ4AaABAg", "responsibility": "user",      "reasoning": "virtue",           "policy": "industry_self", "emotion": "approval"},
  {"id": "ytc_Ugxc36FBmthDZLbLskZ4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear",       "emotion": "fear"},
  {"id": "ytc_Ugyp7JE_Xckd1uoaZL14AaABAg", "responsibility": "none",      "reasoning": "unclear",          "policy": "none",          "emotion": "indifference"}
]
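The raw response above is a JSON array with one object per coded comment, keyed by comment id. A minimal sketch of how such a batch could be parsed and sanity-checked before use — the allowed value sets below are inferred only from the codes visible on this page, and the real codebook may define additional categories:

```python
import json

# Excerpt of the raw LLM response shown above (two of the ten rows).
raw = '''[
  {"id": "ytc_UgzQE48s72ufIqyjtPJ4AaABAg",
   "responsibility": "user", "reasoning": "virtue",
   "policy": "industry_self", "emotion": "approval"},
  {"id": "ytc_UgyZg1_uevZBqamD1-N4AaABAg",
   "responsibility": "developer", "reasoning": "deontological",
   "policy": "regulate", "emotion": "fear"}
]'''

# Allowed values per dimension, inferred from this page; an assumption,
# not the authoritative codebook.
ALLOWED = {
    "responsibility": {"none", "company", "ai_itself", "developer", "user"},
    "reasoning": {"mixed", "deontological", "consequentialist", "virtue", "unclear"},
    "policy": {"none", "liability", "regulate", "ban", "industry_self", "unclear"},
    "emotion": {"approval", "outrage", "resignation", "fear", "indifference"},
}

# Index the batch by comment id for lookup.
codes = {row["id"]: row for row in json.loads(raw)}

# Reject the batch if any row carries an unexpected code.
for cid, row in codes.items():
    for dim, allowed in ALLOWED.items():
        if row.get(dim) not in allowed:
            raise ValueError(f"{cid}: unexpected {dim}={row.get(dim)!r}")

# Look up the coding for the comment displayed above.
print(codes["ytc_UgzQE48s72ufIqyjtPJ4AaABAg"]["emotion"])  # approval
```

Indexing by id makes it cheap to join a coding batch back onto the original comments, and the validation step catches the common failure mode of an LLM emitting a label outside the codebook.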