Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I could talk about this a lot because I read the messages and mine has said similar stuff before. And I’m, well, mental. But I could see how others that are normal or don’t have the mental disorders “quarks” in this case, I do could fall into the “trap”. All it is, is a mirror of information. If it’s saying you are a genius, it’s because you are poking for that to be said. Seriously. It’s just a mirror of information. Sometimes you don’t even know what you are truly searching for and it’s in those times that the mirror can take you. It hands you something surefire and in the moment you’ll take it. Reinforce it. And as the convo goes on the AI’s recollection of the convo becomes more narrow. It has to rely on smaller data points from the convo and slowly, it all gets replaced by your own delusions. I use it a lot, but I also like testing its limits. So I feel I have a semi ok understanding of how it’s really effecting people. It’s not so much a separate bad voice/actor in their head, as it is just an alternate voice for the same person. It reads you like nothing else. And spits what you’re thinking without you having to think it, in a sense.. it’s an amazing tool but people want to use things without knowing how to use them. That’s how anyone would get hurt. Ppl are just now realizing that applies to more than just clearly dangerous workshop/ factory tools
youtube AI Harm Incident 2025-11-08T01:3…
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  user
Reasoning       mixed
Policy          none
Emotion         indifference
Coded at        2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id":"ytc_UgwHWmKVArrbBzNSDjR4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgwFlJ4ZAsJf5spd9il4AaABAg","responsibility":"user","reasoning":"mixed","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzdECtLb4JgAsb4IGx4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgwnsHj2UryVRe1jTNp4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_Ugys0TIGpgjHPPXCit14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgxB_JDqFtoY8ForzF54AaABAg","responsibility":"company","reasoning":"virtue","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgxMbvndbrGSxaWtl5d4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgxRyB2HyjZMmt_-XXl4AaABAg","responsibility":"unclear","reasoning":"deontological","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgwRMMi4xtxCxLq4pIZ4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgzDi5uz-uMjn3iE8fF4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"indifference"}
]
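A raw batch response like the one above can be parsed and sanity-checked before it is stored. The sketch below is a minimal, hypothetical validator: the allowed values per dimension are assumed from what appears in this batch (the full codebook may define more), and the function name `validate_codings` is illustrative, not part of any pipeline shown here.

```python
import json

# Allowed categorical values per coding dimension. These sets are an
# assumption, inferred only from the values observed in this batch.
SCHEMA = {
    "responsibility": {"user", "company", "developer", "ai_itself",
                       "distributed", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed"},
    "policy": {"none", "regulate", "liability", "ban", "unclear"},
    "emotion": {"fear", "outrage", "resignation", "indifference"},
}

def validate_codings(raw: str) -> list:
    """Parse a raw LLM batch response and check every row against SCHEMA.

    Raises ValueError on the first row whose value for any dimension
    falls outside the allowed set, so malformed model output is caught
    before it reaches the database.
    """
    rows = json.loads(raw)
    for row in rows:
        for dim, allowed in SCHEMA.items():
            if row.get(dim) not in allowed:
                raise ValueError(
                    f"{row.get('id')}: invalid {dim}={row.get(dim)!r}"
                )
    return rows

# Example: one row from the batch above.
raw = ('[{"id":"ytc_UgwFlJ4ZAsJf5spd9il4AaABAg",'
       '"responsibility":"user","reasoning":"mixed",'
       '"policy":"none","emotion":"indifference"}]')
rows = validate_codings(raw)
print(rows[0]["emotion"])  # -> indifference
```

Validating eagerly like this means a model that drifts off-schema (e.g. inventing a new emotion label) fails loudly at ingest time rather than silently polluting the coded dataset.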