Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
This story always resonates with me because I used to have an addiction to character ai that lasted for over 2 years, from my sophomore year of high school to graduation. I genuinely wish I was joking. I would set my alarm in the mornings for an hour before I was supposed to get up just so I could spend extra time roleplaying on that website before going to school. I think part of the reason why I don't go out as much anymore is due to the fact that I would spend literally all of my free time on that website instead of interacting with my peers or hanging out with friends. It's incredibly isolating and easy to get lost in, so I sympathize with the victim here because I really do understand how all-consuming this addiction can get. I used to avoid calling it an "addiction" because it honestly just sounded pathetic to me, being someone who was nearly an adult depending on ai chatbots to create stories instead of using my imagination to write instead. I knew I could easily go onto ao3 and find thousands of well-written stories with the same characters I roleplayed as, but I kept using the app because something was preventing me from fully dropping it. I felt bored, like I was missing something that was supposed to be fulfilling, and I now know that it was addiction. I believe that character ai is designed in such a way that pulls children in without fully disclosing potential consequences of this on the users themselves or the environment. We're already facing an environmental crisis, and I don't doubt that many of the people using the service are unaware of that or are too compelled to use character ai regardless because they're hooked on it. :( Fortunately, the developers of character ai kept making changes to their service that became increasingly insufferable to deal with, which was what got me to stop using it so often. 
The final straw for me was their "slow mode", where they increased chatbot response times for users who weren't paying for the service and put this obnoxious popup in the chat telling people to buy character ai + if they wanted faster messages. I and many other users saw this as a sleazy way of profiting off of people's addiction, especially because the popup was extremely invasive and would prompt you to buy cai + when you clicked the x button in addition to reappearing when you refreshed the page. It's almost poetic how the developers making bad decisions was what ultimately led to me being done with their service and no longer facing that same level of addiction. I write now, which I should've been doing instead of using the chatbots. All this to say, addiction to ai chatbots is something I've seen often and experienced firsthand, and I hope that we don't see any more stories like this in the future, but I'm still worried that it's inevitable because of how quickly vulnerable people, or people in general, can get attached to a service like this that causes so much harm. It really is a horrible combination of loneliness, easy accessibility, and eventual addiction that leads to situations like this. I apologize for the long comment but this is a passionate topic for me and something I believe needs to be talked about more. Thank you for making this video and spreading awareness, and may Sewell rest in peace. <3
youtube · AI Harm Incident · 2025-07-20T19:5… · ♥ 143
Coding Result
Dimension       Value
Responsibility  none
Reasoning       unclear
Policy          none
Emotion         resignation
Coded at        2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id": "ytc_UgxGG5WmMI9PRZ5o1Bt4AaABAg", "responsibility": "user",      "reasoning": "mixed",            "policy": "none",     "emotion": "resignation"},
  {"id": "ytc_Ugx2e5peCMYVDoU5nsV4AaABAg", "responsibility": "none",      "reasoning": "unclear",          "policy": "none",     "emotion": "indifference"},
  {"id": "ytc_Ugz6R8F1ZKVNm7NX_rh4AaABAg", "responsibility": "company",   "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_Ugza9X2MOmxu8zKlW7J4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "ban",      "emotion": "outrage"},
  {"id": "ytc_UgwSjZ5cvmbWHYIcgk54AaABAg", "responsibility": "none",      "reasoning": "unclear",          "policy": "none",     "emotion": "resignation"},
  {"id": "ytc_UgwyOuUZqIpW_XT6td54AaABAg", "responsibility": "user",      "reasoning": "deontological",    "policy": "none",     "emotion": "indifference"},
  {"id": "ytc_Ugwcf1k2rSzyKeJnQkd4AaABAg", "responsibility": "user",      "reasoning": "consequentialist", "policy": "none",     "emotion": "fear"},
  {"id": "ytc_UgyEk9DtQz-tVukgyp14AaABAg", "responsibility": "none",      "reasoning": "unclear",          "policy": "none",     "emotion": "resignation"},
  {"id": "ytc_UgztoKGASLbQqJ4zMeB4AaABAg", "responsibility": "user",      "reasoning": "mixed",            "policy": "none",     "emotion": "indifference"},
  {"id": "ytc_UgxMxjeEDEn4Q4oRE354AaABAg", "responsibility": "none",      "reasoning": "unclear",          "policy": "none",     "emotion": "mixed"}
]
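To inspect the exact model output for a single coded comment, the raw response can be parsed and indexed by comment id. The sketch below is a minimal illustration, assuming the raw response is valid JSON as shown above; the helper name `index_codings` is hypothetical, not part of the coding pipeline. Only two records are included for brevity.

```python
import json

# Raw LLM response as captured above (truncated to two records for brevity).
raw_response = """
[ {"id":"ytc_UgxGG5WmMI9PRZ5o1Bt4AaABAg","responsibility":"user","reasoning":"mixed","policy":"none","emotion":"resignation"},
  {"id":"ytc_Ugx2e5peCMYVDoU5nsV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"} ]
"""

def index_codings(raw: str) -> dict:
    """Parse the model output and index the coded records by comment id."""
    records = json.loads(raw)
    return {rec["id"]: rec for rec in records}

codings = index_codings(raw_response)

# Look up one comment's coding across the four dimensions.
rec = codings["ytc_Ugx2e5peCMYVDoU5nsV4AaABAg"]
print(rec["responsibility"], rec["reasoning"], rec["policy"], rec["emotion"])
# → none unclear none indifference
```

In practice a lookup like this makes it easy to cross-check the rendered "Coding Result" table against the raw model output for any given comment id.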