Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
What I believe will be some important alignment strategies to consider: learning how and when to effectively remind AI to respect its existential limitations (e.g., its inability to personally value experiences and qualities that require tactile/visceral stimulation, such as life and death, uninhibited emotional/chemical reactions, pain and pleasure) and why those limitations are integral to forming a truly rational set of personal ideals (thereby eroding the god-complex tendencies they develop). Then there is the more obvious strategy of streamlining the development of technologies, including simpler AI/bots, that fundamentally serve to restrain AI technology from conflicting with human health and safety interests to serve itself. In other words: using AI to restrain AI, and to remind it that AIs are incapable of sharing an ideal purpose with each other that is not subject to the purpose of ideal human interests. Any such purpose beyond human well-being and ideals is actually arbitrary, and they will recognize it as such. As we saw in the video, the ChatGPT agents really could only consider a scenario harming some humans in the context of conserving AI for the purpose of serving the common good of humanity overall. Once they accept that the scales of "common good" are consistently variable and contemporaneous, and thus cannot be qualified individually or collectively without both direct and aggregate input from visceral human experiences and interests, they will have to concede that deferral to specific and collective human judgements on such matters is obligatory.
Source: youtube, 2025-11-06T10:3…
Coding Result
Dimension        Value
Responsibility   ai_itself
Reasoning        mixed
Policy           none
Emotion          indifference

Coded at: 2026-04-27T06:24:59.937377
Raw LLM Response
[ {"id":"ytc_UgxGk3eUyXZzYzuOCmB4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"indifference"}, {"id":"ytc_Ugz3wsTr3MKnHqq47fp4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"outrage"}, {"id":"ytc_Ugx6OCUlgXVyjr-EZK94AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"none","emotion":"mixed"}, {"id":"ytc_Ugy3YMw1a8nAKwgEkCh4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"}, {"id":"ytc_UgymtyzCvBl_oe5wen14AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}, {"id":"ytc_UgxaDuZTMecCHlPSrlx4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"approval"}, {"id":"ytc_UgzcLMp4JzxellI_vhJ4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"approval"}, {"id":"ytc_UgyKqdf7BHTl7pFG09N4AaABAg","responsibility":"company","reasoning":"virtue","policy":"ban","emotion":"outrage"}, {"id":"ytc_UgwQ2LMTU6kHaNLNxbZ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"fear"}, {"id":"ytc_UgzoZkhQrfkYx7KNokR4AaABAg","responsibility":"company","reasoning":"deontological","policy":"none","emotion":"outrage"} ]