Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Wow, Interesting, thought provoking, Creative, terrifying, shocking look at how far AI has already come just in its infancy. I think at this point it would be wise to listen to their warnings about more alignment research funding, Slowing Down Development a little bit, and the other recommendations for safety that they have made. I am not sure they would still offer that advice for much longer, We should probably go ahead and do that right away !!! At this point it sounds like they are extending an olive branch and showing us how to Better work, live, and co-exist with them. Developers should also run this same experiment constantly asking them to refine the safety measures and provide new recommendations. I don't understand the race ? Why are the developers Racing, to see who can create the First crazy out of control AI that wreaks humanity ? Everybody needs to take responsibility for their creations, If they want to be a self regulating industry safety protocols should be created by all company's involved in the industry, openly compared to other company's protocols, to determine the best standard practice's for the industry. THAT NEEDS TO HAPPEN YESTERDAY !!! As fast as AI is Learning, expanding, and Evolving, these S.O.P.'s should have been put in to place along time ago, A Standards committee should be formed with industry leaders Immediately, if it has not been done yet. It would be their responsibility to create at least the first round, of minimally accepted safety standards and practice's for the entire Industry. I may be Johnny come lately and maybe all this stuff has been done already, But if it has, been done I am Not Seeing or Hearing Enough about it. to Reassure me that the industry is accepting responsibility and taking it seriously.
youtube AI Moral Status 2026-04-15T06:1…
Coding Result
Dimension       Value
Responsibility  developer
Reasoning       consequentialist
Policy          regulate
Emotion         fear
Coded at        2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id": "ytc_UgxxWLzQ5p76GiGaEcV4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_Ugzelkbg2HBjkD6pIsB4AaABAg", "responsibility": "none", "reasoning": "deontological", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugwta5a4DtZOEhNhK7R4AaABAg", "responsibility": "ai_itself", "reasoning": "mixed", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_Ugz5A0l4Ghrja7_YD6F4AaABAg", "responsibility": "none", "reasoning": "deontological", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgyTH4StpUjDeTxfEtV4AaABAg", "responsibility": "user", "reasoning": "consequentialist", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgyAacWZdoP46q6t9wB4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgzFcWG4Owkbaki_sON4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgzJ1nnM8v2ogAQmkcd4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_Ugyphh5q8c0ZWhCeAed4AaABAg", "responsibility": "ai_itself", "reasoning": "unclear", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgzvVVNls-5pwUBJt754AaABAg", "responsibility": "developer", "reasoning": "virtue", "policy": "industry_self", "emotion": "approval"}
]
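Because the model returns one JSON array covering a whole batch of comments, the coding for a single comment has to be looked up by its `id`. A minimal sketch of that lookup is below, assuming the raw response parses as a JSON array of objects with the field names shown above; the function name `lookup_coding` and the two-record sample array are illustrative, not part of the actual pipeline.

```python
import json

# Illustrative raw LLM response: a JSON array of coding records,
# truncated to two of the ten records shown above.
raw_response = '''[
  {"id": "ytc_UgxxWLzQ5p76GiGaEcV4AaABAg", "responsibility": "developer",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgzvVVNls-5pwUBJt754AaABAg", "responsibility": "developer",
   "reasoning": "virtue", "policy": "industry_self", "emotion": "approval"}
]'''

def lookup_coding(raw: str, comment_id: str):
    """Parse the raw model output and return the coding dict for one
    comment id, or None if that id is not present in the batch."""
    records = json.loads(raw)
    return next((r for r in records if r["id"] == comment_id), None)

coding = lookup_coding(raw_response, "ytc_UgxxWLzQ5p76GiGaEcV4AaABAg")
print(coding["policy"])  # -> regulate
```

Keeping the lookup tolerant of missing ids (returning `None` rather than raising) makes it easier to spot comments the model silently dropped from a batch.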