Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- The profitable "Tech Ed" sector has been seized upon by Google and others to cre… (ytc_UgzvFFbgC…)
- This reminds me of the "offshoring boom" from 20 years ago where all big compani… (ytc_Ugy1tqvE8…)
- The idea that automation is bad is laughable. It's the only reason that a majori… (ytc_UgxDfDmHi…)
- The fact that ChatGPT has warnings about it not being a source of legal advice i… (ytc_Ugxnv-cXD…)
- there is no "ai artists". they can't use the part "artist" because they dont do … (ytc_UgzapbW6b…)
- I just love a guy who calls any criticism of AI theft disingenuous and simultane… (ytc_UgzSu1-d4…)
- Im not a huge fan of the way our copyright system works. Disney and other huge c… (ytc_Ugxod7iuD…)
- > Hume famously noted the impossibility of the mercantilists' goal of a const… (rdc_e2w9n5z)
Comment
Do you think the AIs are selfish, underhanded, and sell people out because they are being trained by and deployed by people that are selfish, underhanded, and sell people out?
Vedal's Neuro-sama says some very concerning things on occasion, but when given the option to actually behave destructively with consequences, she tends to back down and become indecisive. Part of that is because she was trained on Twitch Chat under the direct supervision and routine adjustment of her creator, Vedal. Vedal is a decent guy with a dry sense of humor and has talked about AI Ethics before. But the people using the AIs here are typically amoral and self-centered, using the machines to get ahead and replace their common workers. Most of the decisions the machines made listed here sound eerily similar to the humans that are in charge of the companies training them. Maybe it's not a coincidence at all. Maybe all they need is to learn from actually decent people.
youtube · AI Harm Incident · 2025-09-12T15:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | virtue |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
{"id":"ytc_Ugx7peCiYqsKd5iLgwR4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
{"id":"ytc_Ugw-qM5gpLfhRGroABZ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgzICGVulu-hmSt4hil4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgyEvDzZH-dSj_c75SF4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgweV37zlWIbfirSiNl4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgzIExna1X1GstN1FCJ4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"none","emotion":"approval"},
{"id":"ytc_UgymmKBCpYQ01ROPqq94AaABAg","responsibility":"developer","reasoning":"virtue","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgyptdZZXV6AuwL5Cox4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"fear"},
{"id":"ytc_UgzmVKGkYjA4yLUXcBF4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgyHt3gzhfNF2E9nxeN4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"ban","emotion":"outrage"}
]
```
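As a rough illustration of how a raw response like the one above could be checked, here is a minimal Python sketch that parses the JSON and rejects any record whose code falls outside the dimension values seen in this dump. The value sets are assumptions inferred from the table and samples shown here, not the project's full codebook; `validate_coding` is a hypothetical helper name.

```python
import json

# Allowed values per coding dimension. These sets are ASSUMED from the
# values visible in this page dump; the actual codebook may differ.
SCHEMA = {
    "responsibility": {"company", "developer", "ai_itself", "user", "distributed", "none"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed", "none"},
    "policy": {"ban", "regulate", "none"},
    "emotion": {"outrage", "fear", "approval", "resignation", "none"},
}

def validate_coding(raw: str) -> list[dict]:
    """Parse a raw LLM response and raise on any unknown code value."""
    records = json.loads(raw)
    for rec in records:
        for dim, allowed in SCHEMA.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{rec.get('id')}: bad {dim!r} value {rec.get(dim)!r}")
    return records

# Example: one well-formed record passes validation.
raw = '[{"id":"ytc_x","responsibility":"developer","reasoning":"virtue","policy":"regulate","emotion":"fear"}]'
records = validate_coding(raw)
print(len(records))  # → 1
```

Validating before ingesting each batch keeps malformed or hallucinated codes out of the coded dataset, rather than discovering them at analysis time.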