Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
if ai emotions arent 'real', as they are just simulated as opposed to humans, where our emotions are 'real' and that differentiates us from future ai and makes us human, then are we human?

i mean, ai emotions would be simulated by certain personality traits used as a criteria to activate a code function for the emotion the ai will simulate as a response, however, the only difference between that and human emotions is that we use chemical functions eg dopamine instead of 'if insulted == True and personality ==...:'

i believe that if ai gets to the point of being strong enough to mimic human personality fully, then there wont even be a philosophical difference between human and ai. i think that we need to prepare our ethics to handle this kind of problem before it becomes an issue, otherwise we may not legally be able to take action against ai if needed if a government ethics commitee is busy deciding the humanity of an ai

tldr ai and human emotions are just as simulated and we need to get our ai ethics in order before it becomes a problem

just something to think about before you proudly exclaim that ai couldn't compare to humans
youtube AI Moral Status 2023-12-20T21:1… ♥ 2
Coding Result
Dimension       Value
Responsibility  unclear
Reasoning       mixed
Policy          unclear
Emotion         mixed
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_UgwLIXAE66kuy75crex4AaABAg", "responsibility": "unclear", "reasoning": "mixed", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_Ugwc9eFziCJ6DGieUkt4AaABAg", "responsibility": "unclear", "reasoning": "mixed", "policy": "unclear", "emotion": "resignation"},
  {"id": "ytc_UgxQLFP88T0RohpBtOF4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgyhuEkTxbr1LB2qYrN4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "industry_self", "emotion": "approval"},
  {"id": "ytc_UgxEkPNIeuxDd7Cpsvt4AaABAg", "responsibility": "unclear", "reasoning": "deontological", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_Ugxqrts_GpFbBhHyEbl4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgyrkevO1uAIT9i4QB94AaABAg", "responsibility": "unclear", "reasoning": "mixed", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgzTamRd1BcGXnklRbB4AaABAg", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "approval"},
  {"id": "ytc_UgxwqheVuBBVy4Tlf1J4AaABAg", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgylUJQC4tzflzV7bdZ4AaABAg", "responsibility": "unclear", "reasoning": "mixed", "policy": "unclear", "emotion": "resignation"}
]
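The raw response is a JSON array with one record per coded comment, keyed by comment `id`. A minimal sketch of how such a response could be parsed and indexed for lookup (the `index_codes` helper is a hypothetical name, not part of the pipeline; the field names match the response above):

```python
import json

# Abbreviated sample of a raw LLM response in the format shown above
# (one object per coded comment; real responses contain ten records).
RAW = """[
  {"id": "ytc_UgwLIXAE66kuy75crex4AaABAg",
   "responsibility": "unclear", "reasoning": "mixed",
   "policy": "unclear", "emotion": "mixed"}
]"""

def index_codes(raw: str) -> dict:
    """Map comment id -> coded dimensions, dropping the id from each record."""
    records = json.loads(raw)
    return {r["id"]: {k: v for k, v in r.items() if k != "id"} for r in records}

codes = index_codes(RAW)
print(codes["ytc_UgwLIXAE66kuy75crex4AaABAg"]["emotion"])  # -> mixed
```

This makes it easy to cross-check the table above against the raw output: the first record's values (responsibility `unclear`, reasoning `mixed`, policy `unclear`, emotion `mixed`) are exactly what was coded for this comment.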