Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- "@OGPolaroid if you're referring to the training of neural networks, then with t…" (ytr_UgwVVdVEo…)
- "@icandostuff8046 the thing is they scan all that art and compile it together, as …" (ytr_UgzYsltYZ…)
- "Nah, AI cant even write an email properly / I aint gonna be replaced anytime soon…" (ytc_UgxC3MGiq…)
- "It is not true that Ai is more dangerous than nuclear, in fact this is the door …" (ytc_Ugw9YsjJb…)
- "How is understanding different from computing? We start early life at a clear fr…" (ytc_UgwdExlVB…)
- "The most racist people in the world are black, you don't need an AI code to tell…" (ytc_UgxDScHIj…)
- "@Jon@JonathanLoganPDX 600 IQ AI with 500 PhDs of knowledge would be incredibly i…" (ytr_Ugy2Fww68…)
- "And all just to have Copilot, writing emails, or Meta creating some stupid filte…" (ytc_UgzeByOUc…)
Inspected comment
What is really sad is the family is blaming an app for the actions of a mentally ill family member who trusted an app more than other people. I get they are in pain but their anger is misplaced.
Btw, I once tested openai to make it believe I was losing my mind, it immediately put up the guard rails and asked me to seek mental health assistance. Didnt encourage any delusions even when I clarified it was a test. Even sometimes when I am a bit stress in real life and type things out fo character, it has never encouraged any self harm.
Platform: youtube · Topic: AI Harm Incident · Posted: 2025-11-12T13:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | deontological |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
[
{"id":"ytc_Ugzy7XfizYtkxZSCgWB4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgwXWTz5Fc9kK8AM8I54AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgxD7SRDYHpfjDaNOeJ4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgwvW4l2a8MHpwMFXCt4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_UgxqCTTpH9fwEEr9s2d4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgwDHL9konBxydKAIcZ4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxlcxUtO88JWsnD-l14AaABAg","responsibility":"unclear","reasoning":"virtue","policy":"unclear","emotion":"approval"},
{"id":"ytc_Ugwbb9_M-hZUMjMO7c14AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"mixed"},
{"id":"ytc_UgyNjfYvwuEOlDf0JmF4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgwIY7SGO4CXwum87nR4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"}
]
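Responses like the one above can be validated before the coded values are stored. The sketch below parses a raw LLM response and drops any record whose ID lacks the `ytc_`/`ytr_` prefix or whose dimension values fall outside the category vocabulary; the vocabularies here are inferred from the records shown on this page, and the real codebook may include additional values.

```python
import json

# Allowed values per coding dimension, inferred from the records shown
# above (assumption: the actual codebook may define more categories).
SCHEMA = {
    "responsibility": {"user", "company", "government", "ai_itself", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed", "unclear"},
    "policy": {"none", "ban", "liability", "regulate", "unclear"},
    "emotion": {"resignation", "fear", "indifference", "approval", "mixed", "outrage"},
}

def validate_coding(raw: str) -> list[dict]:
    """Parse a raw LLM response and keep only records with a valid
    comment-ID prefix and in-vocabulary values for every dimension."""
    records = json.loads(raw)
    valid = []
    for rec in records:
        # Comment IDs on this page carry a ytc_ (comment) or ytr_ (reply) prefix.
        if not rec.get("id", "").startswith(("ytc_", "ytr_")):
            continue
        if all(rec.get(dim) in allowed for dim, allowed in SCHEMA.items()):
            valid.append(rec)
    return valid

# One record taken verbatim from the raw response above.
raw = ('[{"id":"ytc_UgwDHL9konBxydKAIcZ4AaABAg","responsibility":"user",'
       '"reasoning":"deontological","policy":"none","emotion":"indifference"}]')
print(len(validate_coding(raw)))  # 1
```

A record that passes validation maps directly onto the "Coding Result" table format: one row per dimension, with the coded value in the second column.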