Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
28:13 I think the allure of AI is that it’s not going to run to an authority or report you. Mandated reporters are wonderful and it’s great that people want to help people in a very dangerous situation but a lot of times in our lives, we’re at a place where we’re not sure what would be the right thing for us and we’re scared about what other people will say or do if we say blah blah blah and there’s a keyword in there that’s supposed to set off an intervention. What would be great is if we had actual human advisors who we could go to and we could say hey there’s this person and they’ve been doing these things and I feel uncomfortable about it but at the same time I’m not sure if I’m just being overly sensitive or if I’m thinking too much of it . And they could say yes or no you know give them some actual advice. Or I could be like, hey, I don’t feel like my life is so great and I kind of feel like I want to delete myself. They could be like OK well, I’m confidential. I can’t do anything about it but what I can do is I can tell you there are different options there’s do you know that there’s online counseling available. That’s pretty cheap? Did you know that there’s this after group? I think you would really get something out of it. Here’s a book. But no what we have now is oh my God you’re thinking about this red sirens. Call an ambulance. Get him to the hospital right now. Call their mother call her grandmother. Call the reporters. Everyone’s gonna know. Even if we don’t tell them, they’re gonna know because people aren’t stupid. You know we need people that we can actually go to that we can trust aren’t going to run behind our backs and set things in motion before we’re ready.
youtube · AI Harm Incident · 2025-12-11T13:5… · ♥ 1
Coding Result
Dimension        Value
Responsibility   distributed
Reasoning        contractualist
Policy           none
Emotion          mixed
Coded at         2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id":"ytc_Ugxj80H7MIapl8UWxUt4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgwHebWvJ3Y2O-o1IeV4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgyfTL6I6PsyS5i7O5N4AaABAg","responsibility":"company","reasoning":"unclear","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_UgycQwiyJwFAS_h8yeR4AaABAg","responsibility":"user","reasoning":"unclear","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgzeKWyDC7xe45ep4ZV4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgxH0RtZgt2h9XLk4tp4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"mixed"},
  {"id":"ytc_Ugxh4pnL77eKZKgkxs54AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"fear"},
  {"id":"ytc_UgzcUpMg9B4O8oTPy-Z4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugz89iQ4VYUzN013Bqd4AaABAg","responsibility":"distributed","reasoning":"contractualist","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgxHXQwxzG0IQvvPnNx4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"resignation"}
]
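Since the raw response is a JSON array of per-comment codes, looking up the coding for a single comment id can be done with a few lines of standard-library Python. A minimal sketch, assuming the model output parses cleanly as JSON (the two entries below are copied from the response above; a real response may need stripping of surrounding prose or code fences first):

```python
import json

# Raw model output: a JSON array of per-comment codes (truncated to two
# entries from the response above for illustration).
raw = '''[
  {"id":"ytc_Ugz89iQ4VYUzN013Bqd4AaABAg","responsibility":"distributed","reasoning":"contractualist","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgxHXQwxzG0IQvvPnNx4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"resignation"}
]'''

# Index the codes by comment id for O(1) lookup.
codes = {row["id"]: row for row in json.loads(raw)}

# Fetch the coding for the comment shown on this page.
code = codes["ytc_Ugz89iQ4VYUzN013Bqd4AaABAg"]
print(code["responsibility"], code["reasoning"])  # distributed contractualist
```

Indexing by id also makes it easy to spot comments the model skipped: any id present in the batch but missing from `codes` went uncoded.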