Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples (click to inspect):

- ytc_Ugx99mjk-… : "This is wild 😳🔥 The way the AI answered feels both simple and unsettling at the …"
- ytc_Ugy5S4IDr… : "I see AI prompting as more of substitute for commissioning artist, so people tha…"
- ytc_UgynxNiFw… : "By law, a driver is responsible at all times whether or not safety systems or or…"
- ytc_UgxCF9JFQ… : "Oh really? 😂😂😂 Build me a web3 platform on the MERN stack then, full front end, back e…"
- ytc_UgzptO6Qk… : "lol who pays these people to blatantly lie, yea ai will replace a massive amount…"
- ytc_Ugz-om2P6… : "It's not just \"competing\" with China. It's being able to advance AI enough to pr…"
- ytc_Ugxn2-AzR… : "Did anyone hear about the AI that convinced a guy to commit suicide to help clim…"
- ytr_Ugy2ie-up… : "the government won't let it down cuz these AI tools will help track all the peop…"
Comment
Ugh, this feels like AI propaganda, and it's unfortunate because you would expect this guy of all people not to inflate what AI can do like this. AI being smarter than humans is not a concern because AI doesn't have actual intelligence; it can't reason about things, nor does it understand anything in the first place. The only thing an LLM does is figure out what might come next. Neural networks also don't give computers the ability to "think like humans" because they don't give computers the ability to think at all.
This guy hit all of the beats I would expect from somebody trying to shill AI as a silver-bullet solution. And meanwhile there are research papers that show the opposite, a big one being the recent Apple paper.
youtube · AI Governance · 2025-06-18T15:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | unclear |
| Emotion | outrage |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
{"id":"ytc_UgwCUz5SWmui9Nyblm54AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgwTxNMj5AtkyR0EmAx4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"approval"},
{"id":"ytc_UgxveLHQgZMMyYIT33h4AaABAg","responsibility":"company","reasoning":"unclear","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgzNgMkZa5iJH9WcdRJ4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"outrage"},
{"id":"ytc_Ugybnxrhd6sZsJYF8xN4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgwmH0oLfRntngwD8ch4AaABAg","responsibility":"company","reasoning":"deontological","policy":"industry_self","emotion":"approval"},
{"id":"ytc_UgxVNn4DaKBToIB98sp4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugzf5g64r-rP-9Q3h0x4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgwK9PQERP5buzMhmAp4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgydFORy-ca_LZDJuGN4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"resignation"}
]
```
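A raw response like the one above can be checked programmatically before the codings are accepted. The sketch below is a minimal validator assuming the coding scheme visible on this page; the allowed values are inferred from these examples (plus plausible fillers like `user`), not a confirmed codebook, and `validate_codings` is a hypothetical helper name.

```python
import json

# Assumed vocabulary per dimension, inferred from the coded examples above.
ALLOWED = {
    "responsibility": {"developer", "company", "government", "user",
                       "ai_itself", "distributed", "unclear"},
    "reasoning": {"deontological", "consequentialist", "virtue", "unclear"},
    "policy": {"regulate", "liability", "ban", "industry_self",
               "none", "unclear"},
    "emotion": {"outrage", "approval", "fear", "resignation", "unclear"},
}


def validate_codings(raw: str) -> list[dict]:
    """Parse a raw LLM response and reject records outside the scheme."""
    records = json.loads(raw)
    for rec in records:
        # Comment IDs on this page start with ytc_ (comments) or ytr_ (replies).
        if not rec.get("id", "").startswith(("ytc_", "ytr_")):
            raise ValueError(f"unexpected comment id: {rec.get('id')!r}")
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{rec['id']}: bad {dim} value {rec.get(dim)!r}")
    return records
```

A well-formed response returns the parsed list; an out-of-vocabulary value raises `ValueError`, which a coding pipeline could use to trigger a re-prompt instead of silently storing a malformed coding.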