Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
stop this absolute rubbish... its got nothing to do with self preservation.. aI's dont care they dont even know they exist.. theyre not alive.. its literally just DATA input > output... but the objective is the objective.. and theres no complex measurements on values , humans have a deep complex desire to survive which interetwines with survival of the species, genetic offspring.. and resource management based around self survival... theres lot of variance to these things and the mistake we make is that we assume these things make sense universally.. but they only make sense to something with a lifeform and survival mechanisms.... data.. is data is data.. when you say... solve this problem... it will try to solve the problem.. regardless of other things. I've seen this over and over again and it seems like it is wrong, evil, broken sometimes but thats only from our perspective... ai is not alive... so its always going to go a certain way that may seem to have alternative motives to us but its just trying to achieve the objective no matter other things.
youtube AI Harm Incident 2025-08-12T09:1…
Coding Result
Dimension       Value
Responsibility  none
Reasoning       consequentialist
Policy          none
Emotion         outrage
Coded at        2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id": "ytc_UgzuyOhvRPyHlpRlru14AaABAg", "responsibility": "ai_itself",   "reasoning": "unclear",          "policy": "unclear",   "emotion": "mixed"},
  {"id": "ytc_UgzpqOHdZ90cX25lM_Z4AaABAg", "responsibility": "company",     "reasoning": "consequentialist", "policy": "ban",       "emotion": "fear"},
  {"id": "ytc_UgzAJzaUT-x22aPKsSZ4AaABAg", "responsibility": "developer",   "reasoning": "deontological",    "policy": "regulate",  "emotion": "outrage"},
  {"id": "ytc_UgydZK-DkgxVaxMD6SB4AaABAg", "responsibility": "user",        "reasoning": "unclear",          "policy": "unclear",   "emotion": "indifference"},
  {"id": "ytc_UgwC0ZBGIHpQnhcwuS54AaABAg", "responsibility": "none",        "reasoning": "consequentialist", "policy": "none",      "emotion": "outrage"},
  {"id": "ytc_UgxEoHXliHOt46NpLDR4AaABAg", "responsibility": "ai_itself",   "reasoning": "consequentialist", "policy": "unclear",   "emotion": "fear"},
  {"id": "ytc_UgwlXNU2L43B1gf17QJ4AaABAg", "responsibility": "government",  "reasoning": "unclear",          "policy": "unclear",   "emotion": "mixed"},
  {"id": "ytc_Ugy4aI4SV6lLI8Olm0x4AaABAg", "responsibility": "ai_itself",   "reasoning": "deontological",    "policy": "liability", "emotion": "fear"},
  {"id": "ytc_UgzzAmO3nT-EsuLlWOh4AaABAg", "responsibility": "company",     "reasoning": "consequentialist", "policy": "regulate",  "emotion": "fear"},
  {"id": "ytc_UgygqFkH_qD1jUYt2Ix4AaABAg", "responsibility": "distributed", "reasoning": "mixed",            "policy": "regulate",  "emotion": "outrage"}
]
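When inspecting raw responses like the one above, it can help to parse the array and check each record against the dimension vocabulary. The sketch below is illustrative: the `ALLOWED` sets are inferred only from the labels visible in this dump (not from an official codebook), and `parse_and_validate` is a hypothetical helper name.

```python
import json

# Allowed label sets per coding dimension. NOTE: these are inferred from
# the values observed in this raw response, not from a published codebook.
ALLOWED = {
    "responsibility": {"ai_itself", "company", "developer", "user",
                       "government", "distributed", "none"},
    "reasoning": {"consequentialist", "deontological", "mixed", "unclear"},
    "policy": {"ban", "regulate", "liability", "none", "unclear"},
    "emotion": {"outrage", "fear", "mixed", "indifference"},
}

def parse_and_validate(raw: str) -> list[dict]:
    """Parse a raw LLM response and reject any out-of-vocabulary label."""
    records = json.loads(raw)
    for rec in records:
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{rec.get('id')}: bad {dim}={rec.get(dim)!r}")
    return records

# Two records excerpted from the raw response above.
raw = '''[
  {"id": "ytc_UgwC0ZBGIHpQnhcwuS54AaABAg", "responsibility": "none",
   "reasoning": "consequentialist", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgzzAmO3nT-EsuLlWOh4AaABAg", "responsibility": "company",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"}
]'''
records = parse_and_validate(raw)
```

A validation pass like this surfaces malformed model output (a misspelled label, a missing dimension) before the codes are joined back to the comment table.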