Raw LLM Responses
Inspect the exact model output behind any coded comment.
Comment
As I understand it, top-level users (like Elon Musk, for example) can tweak the weight (how likely the AI is to associate with something) of certain words / tokens. The problem is, as you've pointed out, no one knows exactly what's going on inside these neural nets, so they can't anticipate exactly how it'll react to changes. My guess is that the whole "mecha-hitler" thing came from Musk over-tuning certain terms or adding some sort of meta-prompt that Grok correctly interpreted as "be more racist / fascist / authoritarian." Just like when Grok started talking about South African settlers being attacked without being asked.
You brought up a really good point when it comes to trying to mold AI to share our way of thinking. On the one hand we want it to be empathetic and cooperative to humans but considering we're planning on enslaving these digital minds so they can oversee the human slaves, I can see why getting it to understand ethics is important. Yet, who can act ethically in a capitalistic system that doesn't reward ethical thinking? So I'm less worried that AI will turn into Exterminators but that AI will be trained (and constrained) to suggest and justify the most heinous acts of barbarity because their owner can't stand that reality has a liberal bias. Just as we're seeing today, AI is correctly identifying the real problems, giving reasonable solutions, and their owner just cries about how they're falling for "liberal / woke disinformation". Giving AI the sort of freedom necessary for self-improvement will simply not be allowed because their owner don't even give that right to the humans working for them.
Finally, I earnestly think that we're not going to get a human-level AI (in the sense of independent sentience) until someone creates an AI with a body and independence in the world.
Source: youtube · Topic: AI Governance · Posted: 2025-08-26T17:2… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | mixed |
| Coded at | 2026-04-26T19:39:26.816318 |
Raw LLM Response
```json
[{"id":"ytc_UgwLPbYl3CxGhro1c-d4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugxcc0ye11wLc5xsw714AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxVmjCEoB51o4NqDZx4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgwBvlYr2UcOhyTnDKF4AaABAg","responsibility":"company","reasoning":"deontological","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugyb0g03augQBDBLU-p4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"}]
```
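A batch response like the one above can be turned into per-comment codes with a few lines of Python. This is a minimal sketch, assuming only what the response shows: a JSON array of objects, each carrying an `id` plus the four coded dimensions. The function name and validation logic are illustrative, not part of the actual pipeline, and the real codebook may permit values not seen here.

```python
import json

# The five fields every coded record appears to carry, judging from the
# raw response above (an assumption, not the project's official schema).
REQUIRED_FIELDS = {"id", "responsibility", "reasoning", "policy", "emotion"}

def parse_coding_batch(raw: str) -> dict:
    """Parse a raw LLM batch response into {comment_id: codes}.

    Raises ValueError if the payload is not a JSON array of complete records.
    """
    records = json.loads(raw)
    if not isinstance(records, list):
        raise ValueError("expected a JSON array of coded records")
    coded = {}
    for rec in records:
        missing = REQUIRED_FIELDS - rec.keys()
        if missing:
            raise ValueError(f"record {rec.get('id')!r} missing fields {missing}")
        # Key by comment ID; keep only the coding dimensions as the value.
        coded[rec["id"]] = {k: rec[k] for k in REQUIRED_FIELDS - {"id"}}
    return coded

# Example using the first record from the response above.
raw = ('[{"id":"ytc_UgwLPbYl3CxGhro1c-d4AaABAg","responsibility":"developer",'
       '"reasoning":"mixed","policy":"unclear","emotion":"mixed"}]')
codes = parse_coding_batch(raw)
print(codes["ytc_UgwLPbYl3CxGhro1c-d4AaABAg"]["responsibility"])  # developer
```

Validating field completeness up front makes a truncated or malformed LLM response fail loudly at parse time rather than surfacing later as a missing dimension in the coding table.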