Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples (truncated previews)
- "They could map my brain, my neural arrangement, and just put it on a little ha…" (ytc_UgyhVXd5S…)
- "We don’t expect perfection from humans, so expecting it from AI is unrealistic. …" (ytc_UgxBbLnIk…)
- "You got to be slow if you think you can beat up a robot 🤖…" (ytc_Ugzto31Um…)
- "To put it in perspective: in two years AI went from horse and carriage to a mod…" (ytc_UgzYDQQuZ…)
- "This is so sad. I completely understand where you and others are coming from. Wo…" (ytc_UgyhxpCCI…)
- "I hope that the next jump to a new medium of art isn’t AI it’s mind to picture a…" (ytc_UgxhS9GAK…)
- "I thought I came to watch two AI’s communicate… that annoying man in the middle …" (ytc_UgwOh7KVt…)
- "Just like another example of *Skynet* & *Genisys* (You guys probably know who th…" (ytc_UgzHRceJU…)
Comment
"In the suicide ideation diagnosis, model accuracy differs greatly. GPT-4.1 has the highest accuracy at 69.53%, much higher than average, showing strong risk identification. DeepSeek-R1 671B also performed well at 67.15%. Depression diagnosis performance varied more. Llama4-scout ranks first with 76.98% accuracy, followed by Gemma2-27B (72.02%) and DeepSeek-V3 Pro (69.69%)." For more, please read "Evaluation of large language models on mental health: from knowledge test to illness diagnosis" by Xu et al. 2025. Rest easy, King.
youtube
AI Harm Incident
2026-01-26T02:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | unclear |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_Ugw9srrVjzmaPPVrNAJ4AaABAg","responsibility":"government","reasoning":"deontological","policy":"none","emotion":"fear"},
  {"id":"ytc_UgzziAPx8DcOBwTxCa54AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzBRHZpQnRvrn1HVDJ4AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgyrJ3pLRtEZFf3O0wR4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgzU43D1aBwUBITUd914AaABAg","responsibility":"ai_itself","reasoning":"virtue","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgzDw3z050jECLNXq0t4AaABAg","responsibility":"company","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugxz5HARzaDyCNG3-H94AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgyJO__U8vbZnvvajXN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugy6ITiFLdYTrwKcPvZ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzzKUVOZiheqR4c3_V4AaABAg","responsibility":"distributed","reasoning":"virtue","policy":"none","emotion":"mixed"}
]
```
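A raw response like the one above can be indexed by comment ID to recover each comment's coded dimensions. A minimal sketch, assuming the JSON schema shown (an array of objects with `id`, `responsibility`, `reasoning`, `policy`, and `emotion` fields); the two-row payload here is a shortened stand-in for the full batch:

```python
import json

# Shortened example payload in the same schema as the raw LLM response above.
raw = '''[
  {"id":"ytc_Ugw9srrVjzmaPPVrNAJ4AaABAg","responsibility":"government","reasoning":"deontological","policy":"none","emotion":"fear"},
  {"id":"ytc_UgzziAPx8DcOBwTxCa54AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"}
]'''

# Build an ID -> coding map so any coded comment can be looked up directly.
codings = {row["id"]: row for row in json.loads(raw)}

# Look up one comment's coded emotion by its ID.
print(codings["ytc_Ugw9srrVjzmaPPVrNAJ4AaABAg"]["emotion"])  # fear
```

The same lookup works for any dimension, e.g. `codings[comment_id]["policy"]` to retrieve the coded policy stance.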