Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples (click any comment to inspect its coding)
The only time when AI becomes dangerous is when it becomes biological until then…
ytc_UgzI1pW-q…
OMFG CHAT GPT IS NOT ARTIFICIAL INTELLIGENCE!
IT IS A VIRTUAL INTELLIGENCE. IT …
ytc_UgwckDU1c…
But OPs point is the study specifically examined people who were never infected.…
rdc_g9t348v
Humanity needs to "grow up" for once or we as a species will be trully over and …
ytc_UgzafRWgF…
A very cynical and fallacious take regarding writers. America’s GINI coefficient…
ytr_UgypJi_QM…
AI is what degrades our creativity. Many don’t know how to calculate now because…
ytr_Ugy5GjbIi…
Firstly, I'm not defending big brother. I'm just pointing out how utterly meanin…
rdc_esqeiei
Is telling a machine to act and think like a human make a sapient being? We can …
ytc_Ugx-clDoo…
Comment
One of the biggest assumptions is that AI will be infallible enough or management will be lenient enough not to blacklist any particular AI software or even entire software development-teams whenever they start making multi-trillion dollar mistakes. That and managers would like to have a human scapegoat at every level so they can sack the human rather than the software.
Platform: youtube
Video: Viral AI Reaction
Posted: 2025-11-24T01:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | deontological |
| Policy | liability |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_Ugywf0zTfBkiDcqtslV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugxay0gXX5fij0L9C-d4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgxJ-Gj55gJQkKBN_Gd4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"},
{"id":"ytc_Ugy9oQ8CGhjUxkpsP354AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgxHJfqg5p-H0CH83lV4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"fear"},
{"id":"ytc_UgzspZxMm7lkBQgZAZV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"resignation"},
{"id":"ytc_UgxNZs6xonl_HbPacPR4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgybIBgDd9kRoBgTTXp4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgwJp0jE3kveTfKocf94AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugz8cdRJrzOluPpYJQd4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"}
]
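The raw response above is a JSON array of per-comment records, one per dimension set in the Coding Result table (responsibility, reasoning, policy, emotion). A minimal sketch of how such output could be parsed, validated, and indexed for the "look up by comment ID" view follows. The `CODEBOOK` value sets are inferred only from the sample records shown on this page, not from any authoritative schema, and `index_codes` is a hypothetical helper name.

```python
import json

# Two records copied verbatim from the raw response above (truncated for brevity).
raw = """
[
  {"id":"ytc_Ugywf0zTfBkiDcqtslV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgxHJfqg5p-H0CH83lV4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"fear"}
]
"""

# Assumed codebook: value sets inferred from the sample output on this
# page only; the real coding scheme may allow more categories.
CODEBOOK = {
    "responsibility": {"none", "company", "distributed", "ai_itself"},
    "reasoning": {"unclear", "consequentialist", "deontological", "mixed"},
    "policy": {"unclear", "none", "regulate", "industry_self", "liability", "ban"},
    "emotion": {"indifference", "fear", "approval", "outrage", "resignation"},
}


def index_codes(raw_json: str) -> dict:
    """Parse a raw LLM coding response and index records by comment ID,
    rejecting any record whose value falls outside the codebook."""
    indexed = {}
    for rec in json.loads(raw_json):
        for dim, allowed in CODEBOOK.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{rec.get('id')}: unexpected {dim}={rec.get(dim)!r}")
        indexed[rec["id"]] = rec
    return indexed


codes = index_codes(raw)
print(codes["ytc_UgxHJfqg5p-H0CH83lV4AaABAg"]["policy"])  # liability
```

Validating against a closed codebook at ingest time is what makes a "Coded at" record trustworthy: an LLM that drifts outside the allowed categories fails loudly here instead of silently polluting the dataset.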