Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
This raises a bigger question for me: if AI can’t truly understand emotions without context, what happens when it starts to simulate or develop emotional understanding over time? I explored this idea in a short video called “AI will start to feel.” 👉 https://youtu.be/CWQEb-Z1q3k
youtube AI Moral Status 2026-01-11T01:0…
Coding Result
Dimension       Value
Responsibility  unclear
Reasoning       unclear
Policy          unclear
Emotion         mixed
Coded at        2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id": "ytc_UgyzLh2FPfK97vOksAZ4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgylQCsp6VWsXsPh_JF4AaABAg", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgyxaXsCHp6tbAvU78p4AaABAg", "responsibility": "unclear", "reasoning": "unclear", "policy": "industry_self", "emotion": "approval"},
  {"id": "ytc_UgzfkXYp8NO9sdN2bHx4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgyObqIIehqLCfvolmx4AaABAg", "responsibility": "user", "reasoning": "deontological", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgwHfVOiLzfpMb4_Lhd4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "ban", "emotion": "outrage"},
  {"id": "ytc_UgymqZrf73_hdQFA7EZ4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "regulate", "emotion": "approval"},
  {"id": "ytc_Ugy1E3XS6_XrKfTt2IN4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_UgyHx6qegC-IRd8aZrx4AaABAg", "responsibility": "developer", "reasoning": "virtue", "policy": "unclear", "emotion": "disapproval"},
  {"id": "ytc_UgwZMFDoW02bgdYvqXF4AaABAg", "responsibility": "user", "reasoning": "deontological", "policy": "none", "emotion": "indifference"}
]
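A raw response like the one above can be parsed and validated before the codes are stored. Below is a minimal Python sketch; the allowed value sets in `SCHEMA` are an assumption inferred only from the labels visible in this response, not the project's actual codebook, and `parse_raw_response` is a hypothetical helper name.

```python
import json
from collections import Counter

# Assumed codebook: value sets are inferred from the labels seen in the
# raw response above, not taken from the project's real schema.
SCHEMA = {
    "responsibility": {"ai_itself", "user", "company", "developer", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"none", "ban", "regulate", "liability", "industry_self", "unclear"},
    "emotion": {"fear", "mixed", "approval", "indifference", "outrage",
                "disapproval", "unclear"},
}

def parse_raw_response(raw: str) -> list[dict]:
    """Parse a raw LLM response and keep only records that fit the schema."""
    records = json.loads(raw)
    valid = []
    for rec in records:
        # Comment IDs in the dump share the "ytc_" prefix.
        if not rec.get("id", "").startswith("ytc_"):
            continue
        # Every dimension must carry one of the allowed labels.
        if all(rec.get(dim) in allowed for dim, allowed in SCHEMA.items()):
            valid.append(rec)
    return valid

# One record taken verbatim from the raw response above.
raw = ('[{"id":"ytc_UgwHfVOiLzfpMb4_Lhd4AaABAg","responsibility":"company",'
       '"reasoning":"consequentialist","policy":"ban","emotion":"outrage"}]')
coded = parse_raw_response(raw)
emotions = Counter(rec["emotion"] for rec in coded)
```

A tally like `emotions` is a simple way to spot when a batch skews toward one label (for instance, many `unclear` codes), which is often the first sign of a malformed or off-schema model response.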