Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
You know, I've been down that suicide path before. Luckily for me, AI wasn't around at the time. These were thoughts that were primarily alcohol influenced. What I learned was that it was a very conscious decision not to do it and I didn't really want to. Times in life were bad and alcohol influenced many of the thoughts in general. I was simply just lonely at the time and good ol legal substance was energizing bad thoughts. I don't like to be told what and where and when to do something. AI is artificial so I don't think AI would ever have an influence on a personal decision I create. I am organic. I make my decisions. I don't look at my phone and think of it as all knowing. It can do a lot of stuff I can't but even I don't trust it all the time. The best cure for suicidal thoughts is a conversation, a handshake or hug. Many people live in their heads now. I find it to be a very comfortable place now but human interaction is needed. If you can't be by yourself comfortably and you feel like artificial companionship is needed then that should be a red flag to get out of the house and talk to someone.

Source: youtube · Dataset: AI Harm Incident · Posted: 2025-11-09T16:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | mixed |
| Policy | none |
| Emotion | resignation |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
  {"id":"ytc_Ugxb-RA2uyqpjUHlj7l4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgzmDeTmiMSv5NVTo114AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxQArCdn02WKeWooUN4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgzSqG7fWR4t0GXDEVh4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"ban","emotion":"fear"},
  {"id":"ytc_UgyKJRvc0X5VDJ9PMip4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgyN711Oh7jQ7_FpiT14AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgxqloSldreAREZhZQB4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"ban","emotion":"resignation"},
  {"id":"ytc_UgxwclvXgZvpjZiHg3F4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxMh4w1NFab958E4vd4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"approval"},
  {"id":"ytc_UgwVmnDfiiQG7oaJPk14AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"resignation"}
]
```
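A response in this shape can be parsed into per-comment codings with a few lines of Python. The sketch below is a minimal illustration, not the pipeline's actual code: the allowed vocabularies are inferred from the values visible in this batch (the real codebook may define more categories), and rows with missing IDs or out-of-vocabulary values are simply dropped.

```python
import json

# Allowed values per dimension, inferred from the sample batch above
# (assumption: the real codebook may include additional categories).
ALLOWED = {
    "responsibility": {"none", "ai_itself", "company", "user"},
    "reasoning": {"mixed", "consequentialist", "deontological", "virtue"},
    "policy": {"none", "regulate", "ban", "liability"},
    "emotion": {"resignation", "indifference", "fear", "outrage", "approval"},
}

def parse_coding(raw: str) -> dict:
    """Parse a raw LLM response into {comment_id: coding}.

    Malformed rows (no id, or a value outside the allowed vocabulary)
    are skipped rather than raising, so one bad row doesn't lose the batch.
    """
    out = {}
    for row in json.loads(raw):
        cid = row.get("id")
        if not cid:
            continue
        if all(row.get(dim) in vals for dim, vals in ALLOWED.items()):
            out[cid] = {dim: row[dim] for dim in ALLOWED}
    return out

raw = (
    '[{"id":"ytc_Ugxb-RA2uyqpjUHlj7l4AaABAg",'
    '"responsibility":"none","reasoning":"mixed",'
    '"policy":"none","emotion":"resignation"}]'
)
coded = parse_coding(raw)
print(coded["ytc_Ugxb-RA2uyqpjUHlj7l4AaABAg"]["emotion"])  # resignation
```

Keying the result by comment ID makes the "Look up by comment ID" view above a dictionary access, and validating against a fixed vocabulary catches the occasional hallucinated label before it reaches the coding table.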