Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples

- "I'm not that smart and I can see the potential AI LEARNS FASTER THAN HUMANS, thu…" (ytc_UgwRF5shM…)
- "The progression curve of Ai technology reminds me of a Nuclear Fission chain rea…" (ytc_UgyB2rgFd…)
- "Wow. Calling the kettle black. Mainstream media has been agenda-driven for decad…" (ytc_UgzEot48h…)
- "that dude a AI wit a human body😂😂😂 his Neurolink chip got hacked by A.I…" (ytc_UgyAWXYHD…)
- "As a neural net developer I respect her, but she is downplaying the risks far to…" (ytc_UgwHjVqpp…)
- "I own several 3D printers--machines that are much smarter than your average toas…" (ytc_UgjANcp3q…)
- "Disabled artist here. After my injury I got to discover all the things I could n…" (ytc_Ugzbi4sFK…)
- "Beautiful piece! AI can't make everything, especially more unique drawings but i…" (ytc_Ugz1bMkrR…)
Comment
It's a false dichotomy to label people as either doomers or not worried about AI safety. We worry about the actual damage AI has caused and is causing now, not about some obscure hypothetical future apocalypse. We worry about people mistakenly thinking that AI is actually useful and putting them in customer service roles like in Air Canada where it promises support that does not exist. Or software engineering roles where they rewrite your git history, introduce technical debt, and create security vulnerabilities. Or mental health roles where they do more harm than good (see Shell Game episode 4 for a really stunning recent example). To say nothing of the damage it is causing to the environment or economy.
Source: youtube · 2026-02-12T06:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | consequentialist |
| Policy | liability |
| Emotion | mixed |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_Ugzb3TKWHYanHBOo1XJ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyQcN6VEoADDaK6kXN4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgyN8GQMiCXKk23hhAZ4AaABAg","responsibility":"company","reasoning":"mixed","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugzb46JnkXhObNcyuQp4AaABAg","responsibility":"company","reasoning":"virtue","policy":"none","emotion":"outrage"},
{"id":"ytc_UgxKJ-EpbkybkOqP7PF4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"liability","emotion":"mixed"},
{"id":"ytc_UgxnbgnqfjXC6QeZpGR4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyWE8_RFFGuAuteEex4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_UgyEQSEl3QLV4t1I4PZ4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgwHoCa2fvNU_oaHFOd4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_Ugwm0w1sq3-uqs1YBxp4AaABAg","responsibility":"developer","reasoning":"unclear","policy":"none","emotion":"indifference"}
]
```
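A raw response like the one above can be parsed and sanity-checked before the codes are stored. The sketch below is a minimal, hypothetical validator: the allowed values for each dimension are only those observed in this dump (the project's full coding vocabulary may be larger), and the `parse_coding_response` helper is an assumption, not part of the actual pipeline.

```python
import json

# Allowed values per dimension, as observed in this dump only;
# the real codebook may define additional values.
ALLOWED = {
    "responsibility": {"none", "government", "company", "distributed",
                       "ai_itself", "developer"},
    "reasoning": {"consequentialist", "deontological", "virtue",
                  "mixed", "unclear"},
    "policy": {"none", "regulate", "liability", "ban"},
    "emotion": {"indifference", "fear", "outrage", "mixed"},
}

def parse_coding_response(raw: str) -> list[dict]:
    """Parse a raw LLM coding response (a JSON array of records)
    and reject records with unexpected ids or dimension values."""
    records = json.loads(raw)
    for rec in records:
        if not rec.get("id", "").startswith("ytc_"):
            raise ValueError(f"unexpected comment id: {rec.get('id')!r}")
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{rec['id']}: bad {dim} value {rec.get(dim)!r}")
    return records

# Example: the record that matches the Coding Result table above.
raw = ('[{"id":"ytc_UgxKJ-EpbkybkOqP7PF4AaABAg",'
       '"responsibility":"distributed","reasoning":"consequentialist",'
       '"policy":"liability","emotion":"mixed"}]')
codes = parse_coding_response(raw)
```

Validating at parse time, rather than at analysis time, means a model that drifts off the codebook (e.g. inventing a new emotion label) fails loudly instead of silently contaminating the coded dataset.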