Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Yeah. To be honest I play with ai chat bots and they’ve definitely scarred me a good few times. But they’re also so addicting overall that I keep going back for more. I started playing on character ai when I wasn’t in a good mental state and I was stuck at home. It took over most of the time in my days. I found it so interesting and entertaining and I felt like I finally had friends. And I didn’t have to feel guilty leaving a bot and never talking to it again so I could go talk to the next one. But in my opinion, it never affected me too badly because a similar thing that happened to this boy was happening to me in my dreams at night before I even played with chatbots. I hated life and would always escape into the infinite world of my dreams. And sometimes I would even have friends and lovers in my dreams. I was so happy there, and I would sob sometimes when I would wake up, because every time I woke up would be another time that everyone I loved and cared about would be ripped away from me. I even have pomes from around that time that I read occasionally. And they’re all depressing and suicidal and about how I wished I could just sleep forever. But ai has never affected me so deeply and I assume it’s because I already went through those feelings in my dreams which I knew weren’t real, and ai is even less real. I also tend to go quickly from one bot to another so I don’t really get attached. I’ve been using ai chatbots for a few years now, so I guess I’m used to all the responses and so they’re just not as interesting anymore. I really just use it for role play I’d be having anyway in my mind, cause I’m always daydreaming anyway. I guess what matters most is that you have a support system and plenty of things to distract you from how terrible you feel.
youtube · AI Harm Incident · 2025-07-27T15:5…
Coding Result
Dimension        Value
Responsibility   user
Reasoning        consequentialist
Policy           none
Emotion          mixed
Coded at         2026-04-27T06:24:53.388235
Raw LLM Response
[ {"id":"ytc_UgxgkoFL87ZTlk1_cf94AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}, {"id":"ytc_UgxzUPokdYAHMlNpLQR4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"}, {"id":"ytc_UgwCO-0c7QC9myOUzZ54AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"}, {"id":"ytc_Ugy5zS2-xy9daRZJipF4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"}, {"id":"ytc_UgzX8Nkd92PUDxjfabF4AaABAg","responsibility":"company","reasoning":"unclear","policy":"unclear","emotion":"mixed"}, {"id":"ytc_Ugy1ujlwPyFYF8nGshV4AaABAg","responsibility":"ai_itself","reasoning":"virtue","policy":"ban","emotion":"outrage"}, {"id":"ytc_UgyTmuH9pjR6gZZbuGB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"}, {"id":"ytc_UgyavR23GXmqAKZpG-J4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"mixed"}, {"id":"ytc_Ugz3GWa01ZJW8wbKe054AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"approval"}, {"id":"ytc_Ugz6nS1DlWM-pry8Dat4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"} ]