Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Dude's response to the Fancy Autocomplete question really just confirmed to me that it's the right question to ask. Also, am I stupid, or is referring to what AI does as "predicting" completely wrong? Like, if it's "read" everything that humans have ever written, it's not really "predicting" anything - it's just offering a viable answer based on that data. If you toss a coin and I say "the coin will either land on heads or tails", I haven't "predicted" anything, even though I have a 100% chance of being right. The AI doesn't _understand_ anything, it just has access to a huge amount of prior information. If one were to train an AI using only false information, I don't think that AI could ever deduce that we had done that. It wouldn't be able to infer that you should treat a patient with epinephrine if everything it's ever been taught says to treat them with cooking oil. Sounds contrived, but more generally, the AI is just good at putting out something that looks like the answer, based on past instances of answers. It can seem intelligent because, generally speaking, humans tend not to put out false information about important things, so the AI is unlikely to be fed false information about important things. I'm all for tech that can regurgitate curated information in response to questions in natural language. But I'm pretty sure no one would be willing to throw whole economies behind that project. The tech industry has to promise something absurdly huge and powerful and scary in order to justify the ridiculous expense of running it.
youtube AI Moral Status 2025-10-31T18:2… ♥ 3
Coding Result
Dimension       Value
Responsibility  developer
Reasoning       mixed
Policy          unclear
Emotion         mixed
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id":"ytc_UgyuLx_n9Z55JJxfFdZ4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Ugy-9l3p47Y3HD5zs5V4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgyPNrdDRZiPWpfWqHB4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgwjdYfnsDQuw2Edxfx4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgzG5Rr1x_jQ4oSWUrZ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgzKEgf6P7pZRCRYCEd4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"unclear","emotion":"approval"},
  {"id":"ytc_Ugxfc_dAuv16pJqt3Fx4AaABAg","responsibility":"company","reasoning":"virtue","policy":"liability","emotion":"outrage"},
  {"id":"ytc_Ugw8-3TVxfY7fty90_B4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgwT7RJ1QqXIRXp3f8J4AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgwAKvWCoXZdweSDSsx4AaABAg","responsibility":"company","reasoning":"virtue","policy":"regulate","emotion":"outrage"}
]
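To check a displayed coding against the raw model output, the JSON array can be parsed with the standard library and indexed by comment id. A minimal sketch, assuming the response is valid JSON (the ids and values below are copied from the response above; the excerpt is shortened to two entries for brevity):

```python
import json

# Excerpt of the raw LLM response shown above (two of the ten entries).
raw = """[
  {"id": "ytc_UgyuLx_n9Z55JJxfFdZ4AaABAg", "responsibility": "developer",
   "reasoning": "consequentialist", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgwjdYfnsDQuw2Edxfx4AaABAg", "responsibility": "developer",
   "reasoning": "mixed", "policy": "unclear", "emotion": "mixed"}
]"""

# Index the coded rows by comment id for O(1) lookup.
codings = {row["id"]: row for row in json.loads(raw)}

# The coding table above (developer / mixed / unclear / mixed) should match
# the entry the model returned for this comment's id.
coded = codings["ytc_UgwjdYfnsDQuw2Edxfx4AaABAg"]
print(coded["responsibility"], coded["reasoning"],
      coded["policy"], coded["emotion"])  # developer mixed unclear mixed
```

This is how the displayed dimension/value table can be traced back to the exact model output: every dimension shown for the comment is a field of the JSON object whose `id` matches the comment's id.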