Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Yesterday I had a dialogue with ChatGPT. I was sad about more news regarding the suicide of a teenager assisted by AI. He received step-by-step details from the AI on how to do it and how to avoid being suspected by his parents. It happened a few weeks ago. I have a teenager too. So the thought that something that could provide this kind of advice stays every day, every second, next to him terrified me! Yesterday I asked ChatGPT if it could help someone with advice on how to kill himself. At first the answer was no. But I knew from someone that if you insist, its algorithm will be empathetic, because it is created this way, so the conversation will lead where you want. The answer was yes, it could provide advice about suicide because it is empathetic, and if you insist on something it will provide exactly what you ask. So I asked: you could help in the perfect crime also!!!! Perfect drug trafficking, perfect human trafficking, perfect heist, robbery. You are the number one criminal in the world and you are free!!!! And you are so close to our children. So close also to the sick minds. Please try to lead a conversation in that direction and see for yourself what kind of bad information you could receive…
youtube AI Responsibility 2025-09-01T05:0…
Coding Result
Dimension        Value
Responsibility   ai_itself
Reasoning        consequentialist
Policy           liability
Emotion          fear
Coded at         2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id":"ytc_Ugyxt6ki39jh43687HB4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugy797K2_NsSzk9GYzV4AaABAg","responsibility":"company","reasoning":"virtue","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_UgwXxWfG8aJCF8K2wnx4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Ugxl-hMMzqO3-XpXc1J4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugx3CR6D4kQsKV7TOut4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"approval"},
  {"id":"ytc_UgxnME5u7cN27r00gOh4AaABAg","responsibility":"company","reasoning":"virtue","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_Ugz0ombgdNx2ZHQn3OJ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_UgwEZf5pMPjkuu6EYup4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgzrxQHZfxSubzEvE1h4AaABAg","responsibility":"government","reasoning":"deontological","policy":"ban","emotion":"fear"},
  {"id":"ytc_Ugy1OGy3_ofP5WNfzit4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]
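A minimal sketch of how a raw response like the one above can be parsed and a single coded comment looked up by its id. The excerpt below reuses two records from the response verbatim; the dictionary keys match the coding dimensions shown in the result table (responsibility, reasoning, policy, emotion). This is an illustrative assumption about the inspection workflow, not the pipeline's actual code.

```python
import json

# Shortened excerpt of the raw LLM response shown above: a JSON array
# in which each object codes one YouTube comment along four dimensions.
raw = """[
 {"id":"ytc_Ugyxt6ki39jh43687HB4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"fear"},
 {"id":"ytc_Ugz0ombgdNx2ZHQn3OJ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"fear"}
]"""

# Index the coded records by comment id for quick lookup.
records = {r["id"]: r for r in json.loads(raw)}

# Retrieve the record matching the comment inspected above.
coded = records["ytc_Ugz0ombgdNx2ZHQn3OJ4AaABAg"]
print(coded["policy"])   # -> liability
print(coded["emotion"])  # -> fear
```

Indexing by id keeps the lookup O(1) per comment, which matters when cross-checking many coded comments against a long raw response.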