Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples

- "So what are you one of those that cry when something is ai when literally the hu…" (ytc_Ugx4LKo-j…)
- "I think humans demise at the hands of robots in the future will start as soon as…" (ytc_Ugycmr7xA…)
- "Yea im tryin to figure out if I use it I'll be selling my soul. A lot of things …" (ytc_UgzmlK9rE…)
- "After I spend around a day trying to get AI to generate me a semi-passable wallp…" (ytc_UgxA0HY_a…)
- "i've said it and i'll say it again / majority of artists hates the unethical pract…" (ytc_UgwnTIrWO…)
- "Initial Q&A was fun. / I am hopeful AI will generate ( for a fee ) term papers …" (ytc_UgzrOMDH0…)
- "I don’t even blame students for using AI to write papers. As someone who graduat…" (ytc_UgwITCn9k…)
- "In the series "Better than us" the bot tried to keep the children safe no matter…" (ytc_Ugz8kK7S8…)
Comment
Here are some of the logical fallacies that Dr. Farquaad himself made:
1. Argument from Personal Incredulity: Claims AI safety is impossible because he (and his Farquaad friends), after 20 years (but sometimes it takes 50+ years - a fallacy within a fallacy) cannot conceive solutions.
2. Equivocation: Labels AI dev "unethical experimentation" by broadly defining "human subjects," bypassing consent. In other words, we’re too dumb to consent.
3. Ad Hominem: insults you and me, again, by calling us "NPCs" in a simulation. He assumes he is right without proving the simulation and that consciousness is emergent, not fundamental; thus dismissing Hoffman/Castrop’s science - because he thinks they are NPCs
These people, him included, are technocrat Farquaad vicars. “Just believe us, you're too dumb, we’ll save you! Give us more money, power, and control to protect you from the monster we’re creating, that we also want you to fall in love with. It’s inevitable. If we don’t make it someone else will, it might as well be us.”
But who knows, the T800 ended up saving humanity.
youtube · AI Governance · 2025-09-04T19:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | liability |
| Emotion | outrage |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_Ugx_bV1jwLAjuNilkOl4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugxeb0e3BsIISpa6Qr54AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"confusion"},
  {"id":"ytc_UgwTDdEgXsZ7_fOv1OV4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgztFGr4QwQqe2QA7kR4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgyfMH21s_XWjLyY2Sx4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgwWStSA1qosnBpGQvR4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgyjkcyiXxGHu13gCAt4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_Ugwx79llVT16gbB0P6Z4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgxsGLDsfs5jZMktyDh4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxJXvUbV2lGnUpDP-B4AaABAg","responsibility":"distributed","reasoning":"unclear","policy":"unclear","emotion":"mixed"}
]
```
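The raw response above is a JSON array with one object per comment and a fixed vocabulary per dimension. A minimal sketch of how such a payload could be parsed, validated, and indexed for lookup by comment ID — the allowed code sets below are inferred only from values visible on this page (the project's actual codebook may differ), and the `ytc_bad` record is a hypothetical example of an invalid code:

```python
import json

# Assumed code vocabularies, inferred from codes visible on this page.
SCHEMA = {
    "responsibility": {"developer", "company", "ai_itself", "distributed", "none", "unclear"},
    "reasoning": {"deontological", "consequentialist", "unclear"},
    "policy": {"liability", "regulate", "none", "unclear"},
    "emotion": {"outrage", "fear", "approval", "indifference", "confusion", "mixed"},
}

def validate(records):
    """Split records into (valid, errors); each error pairs an id with its bad dimensions."""
    valid, errors = [], []
    for rec in records:
        bad = [dim for dim, allowed in SCHEMA.items() if rec.get(dim) not in allowed]
        if bad:
            errors.append((rec.get("id"), bad))
        else:
            valid.append(rec)
    return valid, errors

# A real record from the page plus one hypothetical invalid record.
raw = """[
  {"id":"ytc_UgyjkcyiXxGHu13gCAt4AaABAg","responsibility":"developer",
   "reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_bad","responsibility":"society","reasoning":"unclear",
   "policy":"none","emotion":"fear"}
]"""

valid, errors = validate(json.loads(raw))
# Index valid records by comment id to support lookup by ID.
by_id = {rec["id"]: rec for rec in valid}
```

The `by_id` dictionary is the natural backing structure for an ID-lookup view, and the `errors` list flags records where the model drifted outside the codebook.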