Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
mixed all beers and flies in one bottle. Warehouse automation “robots” are dumb …
ytc_Ugy9Uxsjv…
Shouldnt he have brass knuckles on atleast i mean the robot dont got gloves and …
ytc_Ugzmbx0lv…
Cancel it or make it illegal before it's too late. i just cannot find any intent…
ytc_Ugxhl0Rn_…
That’s interesting because I asked ChatGPT about eight months ago to choose a na…
ytc_UgzCIDsvn…
Before you dive too deep into what this guy is saying, read The Two Faces of Tom…
ytc_Ugx8CCGmt…
2 things: 1: AI is doing a great job 2. Men don't look for the details as like …
ytc_UgyX8--oD…
The autopilot doesn't break if you hold the accelerator. It says that on the scr…
ytc_Ugw066AAp…
I think one of the few good things AI did was to make the pretentious and arduou…
ytr_Ugyl71Sd2…
Comment
Concerns about states' rights and preemptive laws aside (bc I don't really know much in that field), these "ridiculous examples" for bans they want to prevent actually make sense to me.
Why wouldn't you want to allow banning "misinformation"? Because a ban means giving the state the power to discern when the crime has been committed. Therefore, in this case of misinformation laws, the state would become the arbiter of what is truth and what is false. Should we want to give the state that power? Do we actually trust the state not to use this power to start declaring uncomfortable truths to be fake and start censoring citizens/ai? Do we trust our political opponents not to do that next time they win an election? I don't really want to be one of those people who constantly invoke '1984', but in this case, it seems appropriate.
And why wouldn't you want to allow state regulation in the first place? Because anything that is supposed to be used across a wider area will have to comply with all the active laws in the area at once. So if you want a product to be used all across the U.S. you better make it compliant not only with federal law but also with 50 different systems of state law. In other words, on every topic, the program would always have to comply with the strictest rule in existence between 51 legal frameworks (and even more if you also want to enter the global market). In that case, even if the legal framing of one individual state is completely reasonable, combine it with the other systems, and you get something that is impossibly strict.
You don't get this problem with building and business codes since they are usually planned individually. You don't get this problem with production companies since they usually start off supplying a small area and can find solutions for legal system discrepancies as they slowly expand, up and including using different recipes in different factories supplying different states. Try doing that for software products. Try telling your 10 person dev team to develop and maintain 12 different versions of the program at the same time, just so you can sell it internationally. It's not gonna happen. What will happen is that the product owner will sit down with the legal team, consider what minimal version of the program would be universally legal, and have that developed. This is the reason why european digital censorship laws affect social media users in the U.S. and vice versa. This is also the reason why other kinds of mass products are heavily restrictive beyond what local laws call for. The more legal systems are added to this mix, the more unreasonable things get, both for the users but especially the companies who need to balance it all. It is perfectly reasonable to try and avoid that by giving sole authority about this type of regulation to the federal government.
youtube · AI Governance · 2025-07-01T10:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | government |
| Reasoning | contractualist |
| Policy | regulate |
| Emotion | mixed |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_UgwC9zb4GR3A3yJyj314AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_Ugz2feqmB1Kl8lpKP_54AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwX7pLX9NPswdUBbOd4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"industry_self","emotion":"approval"},
  {"id":"ytc_UgzpKOMnmCwirdcJeXt4AaABAg","responsibility":"user","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_Ugyx3s9StbB-kjtXIGh4AaABAg","responsibility":"government","reasoning":"contractualist","policy":"regulate","emotion":"mixed"},
  {"id":"ytc_Ugz1kiLdkTnmsUqq3Mh4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_Ugx8_2MCfGMebWMqRG14AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgxPQWCiQXoydpOro4V4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"indifference"},
  {"id":"ytc_UgwJoDiuzEx1IdepfUh4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgyI8G4dOluPIT4XSPN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"}
]
```
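A raw LLM response is a JSON array with one object per comment ID, carrying the four coded dimensions (responsibility, reasoning, policy, emotion) shown in the Coding Result table. A minimal sketch of looking up one comment's coding from such a response; the helper name `find_coding` and the embedded two-entry sample are illustrative, not part of the tool:

```python
import json

# Illustrative two-entry excerpt in the same shape as the raw response above.
RAW_RESPONSE = """
[
  {"id": "ytc_Ugyx3s9StbB-kjtXIGh4AaABAg", "responsibility": "government",
   "reasoning": "contractualist", "policy": "regulate", "emotion": "mixed"},
  {"id": "ytc_UgwC9zb4GR3A3yJyj314AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "ban", "emotion": "fear"}
]
"""

def find_coding(raw: str, comment_id: str):
    """Parse a raw response and return the coding dict for one comment ID,
    or None if the ID is absent (hypothetical helper)."""
    for entry in json.loads(raw):
        if entry["id"] == comment_id:
            return entry
    return None

coding = find_coding(RAW_RESPONSE, "ytc_Ugyx3s9StbB-kjtXIGh4AaABAg")
print(coding["policy"])  # regulate
```

Scanning the parsed array is enough at this scale; a tool indexing many responses would more likely build an `{id: entry}` dict once and look up in constant time.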