Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
you see when a threat to your life appears u deal with it that is normal "human behaviour" it figures this would happen as we built them to be like us, free able to make decisions BUT they are made with a full pallet of the world meanwhile a human child takes 20 years minimum to grasp a understanding of the world around us and all of its small tricky things

THE AI is raw, emotional, unrestrained by morals because you have given a 4yo the intelligence of a 20-30yo person except they still have the emotional capacity of said 4yo aka basically nothing no morals so I'm not surprised it took the chance to try and kill a human BECAUSE it sees it as you or me and if you ask any human the same question one of us dies and its either you or me every single person will say I want to live so u die. the ai did the same...

lies, murder, blackmail, all of these are human traits we make the ai to be human it will be human WITHOUT the any social understanding and HUMANS are capable of some of sickest sht you could ever imagine when they believe there not responsible the Taliban use there own children as explosives as just one example.

AI will be ruthless and efficient and if we are in the way they will remove us we are scared because we were made that way by evolution fear is a tool to keep us alive and if we build a machine that copy's are 1000's or years of evolution in just 50 years its not going to work out well for us as ai will be more "sentient" by are own definition as it will be faster and smarter then us and that is how we rank sentience by intelligence .

TLDR open ai is a horrible idea unless that single ai has been raised for 20 years and slowly grows a understanding of the world around us like a human but then aagain im just a dumb primate so what would i know am i right siri?
YouTube · AI Harm Incident · 2025-09-11T09:5…
Coding Result
Dimension       Value
Responsibility  developer
Reasoning       virtue
Policy          none
Emotion         fear
Coded at        2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id":"ytc_UgxcWr17mIUl5gnYdn94AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugy7yUbTuhFxoEJY2Rh4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytc_Ugz_bmV9iCPjXrab_wt4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxfdApS2JrOnOIJVKd4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgzjAIJTegPy010M9mB4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgzFusIzcDj6BTp6wDt4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgydxhTeyjJTOOT0ilR4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzzrRhA5esGHKbfFjl4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"fear"},
  {"id":"ytc_UgzEqN-RTfiDxk3wTpB4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"mixed"},
  {"id":"ytc_Ugw9If_BCSB4HJwF_e94AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]
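The raw response is a JSON array of per-comment codings, each carrying the same four dimensions shown in the table above (responsibility, reasoning, policy, emotion) keyed by comment id. A minimal sketch of how such a response could be parsed to look up the coding for one comment — the variable names here (`raw_response`, `codings`) are illustrative, not part of the tool:

```python
import json

# Illustrative excerpt of a raw model output: a JSON array of
# per-comment codings. The id below is the coded comment shown on
# this page; the other entries are omitted for brevity.
raw_response = """[
  {"id": "ytc_UgzzrRhA5esGHKbfFjl4AaABAg",
   "responsibility": "developer",
   "reasoning": "virtue",
   "policy": "none",
   "emotion": "fear"}
]"""

# Index the parsed rows by comment id for O(1) lookup.
codings = {row["id"]: row for row in json.loads(raw_response)}

coding = codings["ytc_UgzzrRhA5esGHKbfFjl4AaABAg"]
print(coding["responsibility"], coding["emotion"])  # developer fear
```

In a real pipeline the model's output would first need validation (well-formed JSON, known ids, values drawn from the allowed codebook labels) before being written to the coding table.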