Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I'm seeing people struggling to address the same problem with AI as we do with other people. As a child, I was deathly concerned about just how much framing affects the human mind. It can often sabotage performance for better access to moral grey areas. I see the same problem with AI. Alarmingly similar, in that we haven't really put together the groundwork to understand what an AI is doing. The fields that it is drawing from/engaging with. The orientation of it's 'thought process' That ignorance is what scares me. Trying to do that level of oversight for a human is an extreme challenge for modern institutions, and that's before you get to what it would do to privacy. But the arbitration of privacy is inextricably part of the problem. The amount of data required to store that level of performance would grow exponentially. Systemically we can be more efficient, but that means being more selective, which means involving the 'human' side of alignment, on top of whatever tools the AI is actually using. It is a bit of a paradox, but it's a functional one. one that produces real data... or at least hallucinations. have we figure out how to recover from hallucinations? If not, we have madness.
youtube 2025-11-05T16:2…
Coding Result
Dimension       Value
Responsibility  unclear
Reasoning       mixed
Policy          unclear
Emotion         fear
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id":"ytc_Ugx8_uHO4Cs3b2axydt4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgxVN8R5dKU3AKe-S1t4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgyQJTDQLvhifr0rp_J4AaABAg","responsibility":"company","reasoning":"virtue","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugx8jeYWWWTgOGrAZ8N4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgwkMdS6VNvw0U4rHZR4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"resignation"},
  {"id":"ytc_UgwKBazwp5LSbQmM77p4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgxtctHLBicI2ZpzR5N4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"liability","emotion":"approval"},
  {"id":"ytc_UgzuS8PC0tec57H8GNp4AaABAg","responsibility":"company","reasoning":"contractualist","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgyF_p9sQip8zf-qK6F4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"industry_self","emotion":"indifference"},
  {"id":"ytc_Ugwc0IRmD6Ist-r1b2R4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"unclear","emotion":"outrage"}
]
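The raw response above is a plain JSON array of per-comment codes, so looking up the coding for a given comment is a one-line index. A minimal sketch of that lookup, using two records copied from the response above (the variable names are illustrative, not part of any pipeline):

```python
import json

# Two sample records taken verbatim from the raw LLM response above;
# field names match the coding dimensions in the results table.
raw_response = '''
[
  {"id": "ytc_Ugx8_uHO4Cs3b2axydt4AaABAg",
   "responsibility": "unclear", "reasoning": "mixed",
   "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgxVN8R5dKU3AKe-S1t4AaABAg",
   "responsibility": "company", "reasoning": "deontological",
   "policy": "regulate", "emotion": "outrage"}
]
'''

# Index the array by comment id for O(1) lookup.
codes = {rec["id"]: rec for rec in json.loads(raw_response)}

# Retrieve the coding for the comment shown above.
coding = codes["ytc_Ugx8_uHO4Cs3b2axydt4AaABAg"]
print(coding["emotion"])  # fear
```

Indexing by `id` also makes it easy to spot duplicates or missing codes when the model returns fewer records than comments submitted.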