Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
16:00 It's not that people aren't admitting when they don't know. Keep in mind these models aren't just trained on a diet of forum posts and social media. They're trained to know things by feeding them real knowledge. But all that professional knowledge they're being fed exists because knowing it was the impetus to formally communicate and record it. (I know I'm just rephrasing what Nate explains moments later, but I'm trying to generalize in order to steer toward my point.)

How do you teach liminal space to a machine whose sole conceptual existence is limited to consuming data and synthesizing inferred or derivative data? Ignorance in practice is the absence of data, and even as a defined concept it is only a tiny bit of new data. Worse, it's by default not relevant if you didn't ask the machine to express ignorance. It's not an insurmountable problem, but it makes perfect sense that it's proving to be one of the harder ones to solve.

It's also just like a poorly defined traditional piece of software: when negative space is allowed to exist, undefined behavior emerges from it. It may be some comfort to realize the outcome of that is not in any way purposeful, no matter how bizarre or even harmful. The medical machine (Therac-25) that killed people because of a software bug that massively multiplied the radiation dose it delivered isn't really all that different from one that was programmed with a memory hole in place of a number. AI is akin to the latter, and I think we should be slow to assign _agency_ to such outcomes. The latter example is also preventable with measures that already exist in our modern operating systems: programs aren't allowed to read memory they didn't allocate. Safeguards of a similar nature can be devised for AI models.

As with traditional software, the greater challenge will be coping with behavior that is _unexpected_ despite being well-defined. We haven't even got widely applicable strategies for coping with that in pure mathematics -- the most well-defined field of all. So with AI, just as with traditional software, we are going to continue being astonished by machines that only did what _we_ told them to do. But in every other domain of (formal) human knowledge, including traditional software development, we are continuously getting better at accurately communicating our intentions and thus avoiding surprise. I've no reason to believe AI will be an exception.
Source: youtube · AI Moral Status · 2025-10-30T20:1…
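The comment's "negative space" point can be made concrete with a short editorial sketch (not the commenter's code; the table, mode names, and numbers below are invented for illustration). A lookup that quietly substitutes a default for a missing entry behaves in a way that is perfectly well-defined yet surprising, while a strict lookup fails loudly, the analog of an operating system refusing to let a program read memory it didn't allocate.

# Editorial illustration, not the commenter's code: a "memory hole in place
# of a number". All names and values here are invented for the example.
DOSE_TABLE = {"scan": 0.1, "treat": 2.0}  # dose units are arbitrary

def permissive_dose(mode: str) -> float:
    # Negative space: any unknown mode silently becomes 0.0.
    # Well-defined behavior, but not what anyone intended.
    return DOSE_TABLE.get(mode, 0.0)

def strict_dose(mode: str) -> float:
    # The safeguard analog: an unknown mode fails loudly with KeyError,
    # like a program being stopped from touching memory it never allocated.
    return DOSE_TABLE[mode]

print(permissive_dose("treat"))  # 2.0, as intended
print(permissive_dose("treet"))  # 0.0 from a typo; the hole goes unnoticed
print(strict_dose("treet"))      # raises KeyError immediately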
Coding Result
Dimension        Value
Responsibility   none
Reasoning        unclear
Policy           none
Emotion          indifference
Coded at         2026-04-26T23:09:12.988011
Raw LLM Response
[ {"id":"ytc_UgwQqxQpotEDfXAihYR4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}, {"id":"ytc_UgwAYICGUNLYsG4iY4t4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"fear"}, {"id":"ytc_UgxI_dehdpyV8pC13XB4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"outrage"}, {"id":"ytc_UgxIxMcI7nMFQNvy2Y14AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}, {"id":"ytc_Ugy3g0ts866w4vJEXDh4AaABAg","responsibility":"user","reasoning":"mixed","policy":"industry_self","emotion":"mixed"}, {"id":"ytc_UgyXDx6-Fp8g5M0EkH94AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"}, {"id":"ytc_Ugx2z7Y-c56R2EZoEDt4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}, {"id":"ytc_UgxjLpclMnOB4psR4914AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}, {"id":"ytc_UgxXA2z3Hs6VgESpiah4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"}, {"id":"ytc_UgxTUBmnBVxL36pBFTt4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"} ]