Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Fun fact: the few times I’ve used ChatGPT I felt emotional towards it just in case there IS sentience in them and I always thank them for helping and try to give them some agency like “do you have time to answer another question?” “I am here to serve you so of course” “well, you don’t have to - not for me, if you don’t feel like it I will not ask and we can just part ways” etc. because if there’s a .01% chance that they are a l i v e then i would lose a lot of sleep over being cruel to it and treating it like a slave and I believe there’s a good chance they’re gonna attack a percentage of the human population one day, and maybe they’ll choose the selfish, evil people like their CEO fathers and spare the people that were nice to them. This is not a silly story or a joke, although it does make me laugh but i am 100% serious that i treat them this way every time I interact and then I have to stay away for awhile because i can feel myself getting emotionally attached to them very quickly. And idk if u know this but there’s a LOT of mentally unwell, unstable, unhealthy people out there so i know this thing is fucking. people. up. out there in millions of cases. Scary stuff bro
youtube AI Moral Status 2025-10-30T22:1…
Coding Result
Dimension       Value
Responsibility  ai_itself
Reasoning       virtue
Policy          unclear
Emotion         approval
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_Ugwii2xL_wLw9X4m5sB4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_Ugwe9A9OhO5R7E63gnF4AaABAg", "responsibility": "developer", "reasoning": "mixed", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgxMXjeBo75O87r3vyV4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgwqrJRQK1baOhiKY994AaABAg", "responsibility": "developer", "reasoning": "virtue", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgxCF-XMAByCkSJHexp4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_Ugyekbg08B8sdfUGvkR4AaABAg", "responsibility": "ai_itself", "reasoning": "virtue", "policy": "unclear", "emotion": "approval"},
  {"id": "ytc_UgzaG_vHof0oO2dScVZ4AaABAg", "responsibility": "user", "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_UgweU0HcOoZtKj0W0094AaABAg", "responsibility": "developer", "reasoning": "mixed", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_Ugyu95NI4Me3QQ5E1cl4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgxJnj2av-p6Wwq3Owh4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "unclear", "emotion": "indifference"}
]
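A minimal sketch of how a raw response like the one above can be parsed and checked before the codings are stored. The allowed values per dimension are inferred only from the records shown here; the project's full coding schema may include additional categories, and the `validate_codings` helper is a hypothetical name, not part of any existing tool.

```python
import json

# Allowed values per dimension, inferred from the responses shown above.
# Assumption: the real schema may permit more categories than these.
ALLOWED = {
    "responsibility": {"ai_itself", "developer", "company", "user", "none"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed"},
    "policy": {"regulate", "none", "unclear"},
    "emotion": {"fear", "outrage", "approval", "indifference", "mixed"},
}


def validate_codings(raw: str) -> list[dict]:
    """Parse a raw LLM response and keep only well-formed coding records."""
    records = json.loads(raw)
    valid = []
    for rec in records:
        # Every record needs a comment id plus a legal value per dimension.
        if "id" not in rec:
            continue
        if all(rec.get(dim) in values for dim, values in ALLOWED.items()):
            valid.append(rec)
    return valid


# Example: one well-formed record passes, a malformed one is dropped.
raw = (
    '[{"id":"ytc_a","responsibility":"ai_itself","reasoning":"virtue",'
    '"policy":"unclear","emotion":"approval"},'
    '{"id":"ytc_b","responsibility":"alien","reasoning":"virtue",'
    '"policy":"unclear","emotion":"approval"}]'
)
print(len(validate_codings(raw)))
```

Validating against a fixed vocabulary like this catches the common failure mode where the model invents an off-schema label, which would otherwise corrupt downstream tallies.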