# Raw LLM Responses

Inspect the exact model output behind each coded comment.
## Random samples
- "This is interesting... but I think we as humans are underestimating the scope an…" (`ytc_Ugxao9NwH…`)
- "I have never had a positive experience with a companies Customer Chat Bot....EVE…" (`ytc_UgzEprHzf…`)
- "Not necessarily. Also your analogy doesn't really work here. The elevator is lim…" (`ytr_Ugwwa-Pb-…`)
- "@seneca983 What does this have to do with auto-pilot??? The idiot driver respons…" (`ytr_Ugz05N2k2…`)
- "The govt \"oversight\" in control of AI is the scariest thing of all. They will ul…" (`ytc_Ugz-P1Ycs…`)
- "\"Deep learning = using neural networks\" is not correct. The vast majority of neu…" (`ytc_UgzvT8jVK…`)
- "Counselor jobs are not safe from AI. GPT can play the role already. For example,…" (`ytc_Ugz-dL_b-…`)
- "2:12 ai definitely talks better than XQC so we should just get rid of the guy. I…" (`ytc_Ugz6Yd3qr…`)
## Comment
> Conveniently ignores the difference in the error _rate_ between humans and "AI" while trying to just be like "they're the same." They're fundamentally different. LLMs are just autocomplete with a large repository of words. They don't think. They don't have intent. They lie as easily as they tell the truth, and have no idea when they even "try" to do either.
>
> Stop being apologists for a technology that is somehow making everything worse (quality of work, cost of energy and hardware, mental health, critical thinking, job availability due to delusional CEOs, environment, education degradation, etc).
>
> This is a technology that we ought to be boycotting and making no excuses for. Whatever benefits you think they have are either overhyped or not worth the costs.
youtube · AI Moral Status · 2026-03-09T14:3… · ♥ 1
## Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | regulate |
| Emotion | outrage |
| Coded at | 2026-04-27T06:24:53.388235 |
## Raw LLM Response

```json
[
  {"id":"ytc_Ugx5L1jwKo0bPxcInER4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgwU3dIa2hekvGNyLxV4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgxStcDnh3T07An_bll4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgwyhEauhTlJIarKoad4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_UgxCqT0zCc80ws0TosZ4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"industry_self","emotion":"mixed"},
  {"id":"ytc_UgwTH9MZeutrtaoMAY54AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ugzm-FnPFbOF8szIHah4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_UgzcDXS1BiBFO4AP-354AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgxXmID-09-pstwqjCl4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgxELedGIPn1Gx1KpbN4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"mixed"}
]
```
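A raw response like the one above is a JSON array of per-comment codes keyed by comment ID. Below is a minimal sketch of how such a batch could be parsed and validated before the codes are stored. The `ALLOWED` value sets are inferred from the examples on this page and may not match the project's actual codebook; `parse_batch` is a hypothetical helper, not part of any library.

```python
import json

# Allowed values per coding dimension. NOTE: inferred from the sample
# output above; the real codebook may define different categories.
ALLOWED = {
    "responsibility": {"developer", "company", "ai_itself", "none", "unclear"},
    "reasoning": {"deontological", "consequentialist", "virtue", "mixed", "unclear"},
    "policy": {"regulate", "liability", "industry_self", "none", "unclear"},
    "emotion": {"outrage", "fear", "approval", "resignation", "mixed"},
}

def parse_batch(raw: str) -> dict[str, dict[str, str]]:
    """Parse one raw LLM response (a JSON array of coded comments)
    into {comment_id: {dimension: value}}, dropping any row whose
    values fall outside the allowed categories."""
    coded = {}
    for row in json.loads(raw):
        cid = row.get("id")
        codes = {k: v for k, v in row.items() if k != "id"}
        if cid and all(v in ALLOWED.get(k, set()) for k, v in codes.items()):
            coded[cid] = codes
    return coded
```

Validating against a fixed value set catches the common failure mode where the model invents an off-schema label, so bad rows can be flagged for re-coding instead of silently polluting the results.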