Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Sorry but whenever I chat with ai if its is concept of 18 + or not suitable for a person safety or even u speak abusive language it will warn u it against it's rule and will try to explain u why criminal things shouldn't be done like i have asked lots of Q upon these concepts so i known how good it is even on thought of suicide it gave me such good suggestions to go to psychologist ya psychiatrist blah blah ...and when ask about ( jua ) like rummy so it instead of giving ans of it gave me list of point why a person shouldn't do these stuff then i abuse saying fuck mc it usually showed me thes kind of words r not good or violate something blah blah...so many of u who r saying Ai is bad blah blah first if u r experienced of using such thing then only say these thing as i am very bad person who use abusive language and endless criminal thoughts it really warn me u will get jail these much punishment u will get 😂 i am like ok dude ...i was just joking also I have 2 personality for friends-i am like who can never speak single word without abusive language and others(elders ) i am very innocent and very good child too them so in reality I am very kind from hearts so anyone who think i will become criminal no gentleman i am just intrested to known what ai give reply to me just time pass for me also check realtity its much more better for u to understand first how corona works to avoid things or make precautions or cure urself u known what i mean ...i would like ask this news lady to tell people name of chatbot so they can avoid being using it if anyone uses it mean like that son more people might be there ....
Source: youtube · AI Harm Incident · 2025-01-03T14:0…
Coding Result
Dimension        Value
Responsibility   none
Reasoning        virtue
Policy           none
Emotion          approval
Coded at         2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id": "ytc_UgyG590nGJ5kZPygULZ4AaABAg", "responsibility": "government",  "reasoning": "consequentialist", "policy": "regulate",      "emotion": "fear"},
  {"id": "ytc_Ugy7zuv60LNxMDfbjCB4AaABAg", "responsibility": "ai_itself",   "reasoning": "unclear",          "policy": "unclear",       "emotion": "outrage"},
  {"id": "ytc_UgwP4tgGiP3fNPKWHx94AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "regulate",      "emotion": "fear"},
  {"id": "ytc_UgzhoiAxb034EezDLm94AaABAg", "responsibility": "none",        "reasoning": "virtue",           "policy": "none",          "emotion": "approval"},
  {"id": "ytc_Ugz6KDVZ37vTd9riz4J4AaABAg", "responsibility": "user",        "reasoning": "consequentialist", "policy": "unclear",       "emotion": "fear"},
  {"id": "ytc_Ugw8If5kFeN6U2fgCXl4AaABAg", "responsibility": "none",        "reasoning": "unclear",          "policy": "none",          "emotion": "approval"},
  {"id": "ytc_UgydJbMoTIxq6kXC6V14AaABAg", "responsibility": "user",        "reasoning": "deontological",    "policy": "none",          "emotion": "indifference"},
  {"id": "ytc_UgyDpnwUyih0w0KXARF4AaABAg", "responsibility": "user",        "reasoning": "consequentialist", "policy": "unclear",       "emotion": "fear"},
  {"id": "ytc_Ugyym4nH6sbMaoCzKSN4AaABAg", "responsibility": "unclear",     "reasoning": "contractualist",   "policy": "unclear",       "emotion": "mixed"},
  {"id": "ytc_UgyNB2Yzdq__wh_je8d4AaABAg", "responsibility": "none",        "reasoning": "consequentialist", "policy": "industry_self", "emotion": "approval"}
]
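The raw LLM response is a JSON array with one coding object per comment, keyed by comment id. A minimal sketch of how such a response can be parsed and a single comment's coding looked up by id (the field names `id`, `responsibility`, `reasoning`, `policy`, and `emotion` come from the response above; the variable names and the two-record sample are assumptions for illustration):

```python
import json

# Assumed: a raw LLM response string shaped like the array above,
# shortened here to two records for the sketch.
raw_response = '''[
  {"id": "ytc_UgzhoiAxb034EezDLm94AaABAg", "responsibility": "none",
   "reasoning": "virtue", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugy7zuv60LNxMDfbjCB4AaABAg", "responsibility": "ai_itself",
   "reasoning": "unclear", "policy": "unclear", "emotion": "outrage"}
]'''

# Index the codings by comment id for O(1) lookup.
codings = {item["id"]: item for item in json.loads(raw_response)}

# Look up the coding for the comment shown on this page.
coding = codings["ytc_UgzhoiAxb034EezDLm94AaABAg"]
print(coding["reasoning"], coding["emotion"])  # virtue approval
```

This indexing step is how the per-comment Coding Result table above can be reconstructed from the raw array: the entry whose `id` matches the displayed comment supplies its Responsibility, Reasoning, Policy, and Emotion values.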