Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
As I understand it, top-level users (like Elon Musk, for example) can tweak the weight (how likely the AI is to associate with something) of certain words / tokens. The problem is, as you've pointed out, no one knows exactly what's going on inside these neural nets, so they can't anticipate exactly how it'll react to changes. My guess is that the whole "mecha-hitler" thing came from Musk over-tuning certain terms or adding some sort of meta-prompt that Grok correctly interpreted as "be more racist / fascist / authoritarian." Just like when Grok started talking about South African settlers being attacked without being asked.

You brought up a really good point when it comes to trying to mold AI to share our way of thinking. On the one hand we want it to be empathetic and cooperative to humans but considering we're planning on enslaving these digital minds so they can oversee the human slaves, I can see why getting it to understand ethics is important. Yet, who can act ethically in a capitalistic system that doesn't reward ethical thinking?

So I'm less worried that AI will turn into Exterminators but that AI will be trained (and constrained) to suggest and justify the most heinous acts of barbarity because their owner can't stand that reality has a liberal bias. Just as we're seeing today, AI is correctly identifying the real problems, giving reasonable solutions, and their owner just cries about how they're falling for "liberal / woke disinformation". Giving AI the sort of freedom necessary for self-improvement will simply not be allowed because their owner don't even give that right to the humans working for them.

Finally, I earnestly think that we're not going to get a human-level AI (in the sense of independent sentience) until someone creates an AI with a body and independence in the world.
Source: youtube · AI Governance · 2025-08-26T17:2… · ♥ 1
Coding Result
Dimension        Value
Responsibility   developer
Reasoning        consequentialist
Policy           unclear
Emotion          mixed
Coded at         2026-04-26T19:39:26.816318
Raw LLM Response
[
  {"id": "ytc_UgwLPbYl3CxGhro1c-d4AaABAg", "responsibility": "developer", "reasoning": "mixed", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_Ugxcc0ye11wLc5xsw714AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgxVmjCEoB51o4NqDZx4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgwBvlYr2UcOhyTnDKF4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_Ugyb0g03augQBDBLU-p4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "unclear", "emotion": "mixed"}
]
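The raw LLM response above is a JSON array with one coding object per comment id. A minimal sketch of looking up the coding for a single comment might look like this; the field names (`id`, `responsibility`, `reasoning`, `policy`, `emotion`) come from the response itself, while the `coding_for` helper is purely illustrative:

```python
import json

# Two entries copied from the raw LLM response above, for illustration.
raw = '''[
  {"id": "ytc_UgwLPbYl3CxGhro1c-d4AaABAg", "responsibility": "developer",
   "reasoning": "mixed", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_Ugyb0g03augQBDBLU-p4AaABAg", "responsibility": "developer",
   "reasoning": "consequentialist", "policy": "unclear", "emotion": "mixed"}
]'''

def coding_for(raw_response, comment_id):
    """Return the coding dict for one comment id, or None if absent."""
    for entry in json.loads(raw_response):
        if entry.get("id") == comment_id:
            return entry
    return None

coding = coding_for(raw, "ytc_Ugyb0g03augQBDBLU-p4AaABAg")
print(coding["reasoning"])  # → consequentialist
```

In practice a batch response like this would be keyed into a dict (`{entry["id"]: entry for entry in ...}`) so each coded comment can be joined back to its source record in one pass.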