Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Ugh, this feels like AI proaganda and it's unfortunate because you would expect this guy of all people would not try to inflate what AI can do like this. AI being smarter than humans is not a concern because AI doesn't have actual intelligence, it can't reason about things, nor does it understand anything in the first place. The only thing that a LLM does is figure out what might come next. Neural networks also don't give computers the ability to "think like humans" because they don't give computers the ability to think at all. This guy hit all of the beats that I would expect to see from somebody trying to schill AI as a silver bullet solution. And meanwhile there are research papers that completely show the opposite, a big one being the recent Apple paper.
youtube AI Governance 2025-06-18T15:2…
Coding Result
Dimension       Value
Responsibility  developer
Reasoning       deontological
Policy          unclear
Emotion         outrage
Coded at        2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id":"ytc_UgwCUz5SWmui9Nyblm54AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgwTxNMj5AtkyR0EmAx4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"approval"},
  {"id":"ytc_UgxveLHQgZMMyYIT33h4AaABAg","responsibility":"company","reasoning":"unclear","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgzNgMkZa5iJH9WcdRJ4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_Ugybnxrhd6sZsJYF8xN4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgwmH0oLfRntngwD8ch4AaABAg","responsibility":"company","reasoning":"deontological","policy":"industry_self","emotion":"approval"},
  {"id":"ytc_UgxVNn4DaKBToIB98sp4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_Ugzf5g64r-rP-9Q3h0x4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgwK9PQERP5buzMhmAp4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_UgydFORy-ca_LZDJuGN4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"resignation"}
]
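A raw response like the one above can be parsed and sanity-checked before the codes are merged back into the dataset. Below is a minimal sketch in Python: the four dimension names come from the records above, but the allowed value sets and the `parse_raw_response` helper are assumptions for illustration, not the tool's actual validation logic.

```python
import json

# Allowed codes per dimension (assumed for illustration; the values listed
# are those that appear in the records above, plus "unclear" as a fallback).
ALLOWED = {
    "responsibility": {"developer", "company", "government", "ai_itself", "distributed", "unclear"},
    "reasoning": {"deontological", "consequentialist", "virtue", "unclear"},
    "policy": {"regulate", "liability", "ban", "industry_self", "none", "unclear"},
    "emotion": {"outrage", "approval", "fear", "resignation", "unclear"},
}

def parse_raw_response(raw: str) -> list[dict]:
    """Parse a raw LLM response and keep only well-formed records.

    A record is kept if it is a dict with an "id" field and every coded
    dimension holds one of the allowed values.
    """
    records = json.loads(raw)
    valid = []
    for rec in records:
        if not isinstance(rec, dict) or "id" not in rec:
            continue  # skip malformed entries
        if all(rec.get(dim) in codes for dim, codes in ALLOWED.items()):
            valid.append(rec)
    return valid
```

For example, a record with an unknown code in any dimension (or a missing `id`) is silently dropped, so only records the codebook recognizes reach the merged table.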