Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
@othala7540 i'm sorry but this makes very little sense to me. firstly, miracles don't exist. we cannot claim that they do because they are not scientifically proven. and once they are scientifically proven, they are no longer miracles. secondly, if AI's morals are better than yours...e.g. if you are a xenophobe/racist/sexist/homophobe, etc. (not claiming that you are but let's speak hypothetically) and AI is unbiased and fair, then AI's morals are superior to yours and this is what should be taught to your kids. Also being so certain that AI will decide to eliminate us is ungrounded. You are projecting your own feelings, what you would do to AI if we found that it's self aware. You would like to destroy it because you are scared. AI is not scared. Destruction is not the most logical solution here and definitely not the only one. Superintelligence would either not care and not intervene in our problems or would try to find a more peaceful way to solve it, imo. Of course this is just theorising, we cannot be sure of its actions, since we cannot grasp the scale of its intelligence. claiming that it will or won't do something is essentially pointless. but my question is still, why do you see AI as the enemy of humanity and not as the continuation of humanity, its successor.
youtube AI Governance 2025-11-15T09:2…
Coding Result
Dimension      | Value
Responsibility | user
Reasoning      | consequentialist
Policy         | none
Emotion        | mixed
Coded at       | 2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id":"ytr_UgzS7QmSNbxaoKDAbd94AaABAg.APcDR90JnMIAPfWaDViuE3","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytr_UgymgVYVO0UcVDPGWWd4AaABAg.APc0129vKIOAT2jdoe_Ucw","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytr_UgzMzTkSlUezI_hQ0gl4AaABAg.APbpuB6v92LAPbr7sANBwQ","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytr_Ugy5lOxB1Iw1-ObLhuN4AaABAg.APb6UrHIUkuAPhF1XLQ-HC","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"resignation"},
  {"id":"ytr_Ugy55mIq3x3RP7YjizJ4AaABAg.AP_32UDRQvaAP_llMi4Cb5","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytr_UgyAyGL2cHypaqU0Fr54AaABAg.APZFBqkeYw9APZdfUltyD0","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"ytr_UgyzWbVgicGuAIBKbnx4AaABAg.APVmsHF3g8ZAPW9iOcj3G1","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"fear"},
  {"id":"ytr_UgyzWbVgicGuAIBKbnx4AaABAg.APVmsHF3g8ZAPY6oBFvmt0","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
  {"id":"ytr_UgyzWbVgicGuAIBKbnx4AaABAg.APVmsHF3g8ZAPh6qpy_CHb","responsibility":"distributed","reasoning":"virtue","policy":"none","emotion":"outrage"},
  {"id":"ytr_UgyKUr_tuHiJBQyIW914AaABAg.APTmanVAvZSAPh9AGPcg20","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"mixed"}
]
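A minimal sketch of how the raw response above can be checked against a coded comment: parse the JSON array, index the entries by their "ytr_" comment id, and look up the dimensions for one id. The ids and field values below are taken from the response shown; everything else (variable names, the lookup pattern) is illustrative, not part of the tool.

```python
import json

# Assumed shape: the raw LLM response is a JSON array of coding objects,
# one per comment, each keyed by a "ytr_..." comment id (as shown above).
# Two entries copied verbatim from the response for illustration.
raw_response = '''
[
  {"id": "ytr_UgzS7QmSNbxaoKDAbd94AaABAg.APcDR90JnMIAPfWaDViuE3",
   "responsibility": "none", "reasoning": "consequentialist",
   "policy": "none", "emotion": "fear"},
  {"id": "ytr_UgyzWbVgicGuAIBKbnx4AaABAg.APVmsHF3g8ZAPY6oBFvmt0",
   "responsibility": "user", "reasoning": "consequentialist",
   "policy": "none", "emotion": "mixed"}
]
'''

# Index every coding object by its comment id for O(1) lookup.
codings = {row["id"]: row for row in json.loads(raw_response)}

# Fetch the coding for one specific comment and read its dimensions.
coded = codings["ytr_UgyzWbVgicGuAIBKbnx4AaABAg.APVmsHF3g8ZAPY6oBFvmt0"]
print(coded["responsibility"], coded["reasoning"], coded["emotion"])
```

The second entry matches the Dimension/Value table shown for this comment (responsibility "user", reasoning "consequentialist", emotion "mixed"), which is the consistency check this view is meant to support.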