Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
It's not like there's zero hope, but rather than trying to slam the breaks, I strongly believe the best option is to pivot. Currently, AI is limited by things like Reinforcement Learning and many using a single agent architecture. Genetic Algorithm development and multiple agent architecture allows for development of agents that on top of allowing for more robust and dynamic skill sets and greater speed and efficiency, rather than being forcefully aligned to human ethics, the selection process can naturally encourage developing ethical parameters towards our own. That being said, as long as RL is the standard it's kind of a weird situation. RL naturally produces exploitable behaviors and hallucinations, which are both a problem and a way to disrupt them in any case they begin to become dangerous. But as the industry looks to eliminate those problems it may be eliminating that Achilles heel. Especially given the recent development of a process to isolate neurons responsible for those behaviors, this can be concerning.
YouTube · AI Governance · 2026-03-18T03:2…
Coding Result
Dimension       Value
Responsibility  none
Reasoning       consequentialist
Policy          none
Emotion         approval
Coded at        2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id": "ytc_UgwwCSAkZD6PLL_kNDx4AaABAg", "responsibility": "none",        "reasoning": "mixed",            "policy": "none",      "emotion": "indifference"},
  {"id": "ytc_UgyXlSXp10nsT9IHQiR4AaABAg", "responsibility": "ai_itself",   "reasoning": "consequentialist", "policy": "none",      "emotion": "fear"},
  {"id": "ytc_Ugz7kwwZ--k22ZRM6714AaABAg", "responsibility": "none",        "reasoning": "consequentialist", "policy": "none",      "emotion": "approval"},
  {"id": "ytc_Ugz9jD2nhnwe2mhixM14AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_UgxopntF3ItFRmx8fbR4AaABAg", "responsibility": "developer",   "reasoning": "deontological",    "policy": "regulate",  "emotion": "indifference"},
  {"id": "ytc_UgxI-xcsC7ZgI5JnQWB4AaABAg", "responsibility": "ai_itself",   "reasoning": "consequentialist", "policy": "none",      "emotion": "fear"},
  {"id": "ytc_Ugy7gZIqX-nflGOgd154AaABAg", "responsibility": "none",        "reasoning": "mixed",            "policy": "none",      "emotion": "indifference"},
  {"id": "ytc_UgyBpCo47hMm5raOWO14AaABAg", "responsibility": "none",        "reasoning": "consequentialist", "policy": "none",      "emotion": "resignation"},
  {"id": "ytc_UgxiYGS2fEInRDCA8fd4AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "ban",       "emotion": "outrage"},
  {"id": "ytc_UgxBHivph5ar8jQWKg94AaABAg", "responsibility": "developer",   "reasoning": "mixed",            "policy": "none",      "emotion": "mixed"}
]
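The coding result shown above is the entry in this raw response whose id matches the comment being inspected (here, `ytc_Ugz7kwwZ--k22ZRM6714AaABAg`, whose codes are none / consequentialist / none / approval). A minimal sketch of that lookup, assuming only that the raw response is a valid JSON array of records with `id` plus the four dimension fields:

```python
import json

# Raw LLM response: a JSON array of per-comment codes across four
# dimensions (responsibility, reasoning, policy, emotion).
# Truncated here to one record for illustration.
raw = """[
  {"id": "ytc_Ugz7kwwZ--k22ZRM6714AaABAg",
   "responsibility": "none", "reasoning": "consequentialist",
   "policy": "none", "emotion": "approval"}
]"""

codes = json.loads(raw)

# Index the records by comment id so each comment's coding result
# can be looked up directly.
by_id = {rec["id"]: rec for rec in codes}

rec = by_id["ytc_Ugz7kwwZ--k22ZRM6714AaABAg"]
print(rec["reasoning"])  # consequentialist
print(rec["emotion"])    # approval
```

In practice the whole batch (ten records above) would be indexed the same way; any record whose id is missing or whose dimension values fall outside the codebook would be flagged rather than stored.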