Raw LLM Responses
Inspect the exact model output for any coded comment, or look up a comment by its ID.
Random samples — click to inspect:
- "Or… let's just not use AI at all. Different forms of media have warned us of the…" (ytc_Ugyebcnj4…)
- "Steve ... great video ... Knowledge is in 3 main domains .. Cognitive ( facts …" (ytc_Ugx9XjyMo…)
- "That is a holy unfair advantage. There's no way a human can win against a robot…" (ytc_UgwkMZ6gh…)
- "@@ShannonBarber78 No, that would be an example of a misaligned AI. Since normall…" (ytr_UgxBKjo7J…)
- "AI can confidently lie to you and you wouldn't know it, AI would not answer 'i d…" (ytr_Ugxq844fr…)
- "Basically, when you place large language model AIs (like chatgpt, deepseek) unde…" (ytc_UgzyqrKpU…)
- "I had a feeling some of that work in that 8:21 point you made, the colors and ro…" (ytc_Ugz3Z0WB7…)
- "Self-driving taxi is a dumb idea. I worked for a traffic and transportation engi…" (ytc_UgyzbmYef…)
Comment
I see everyone saying that a lack of regulation is a bad thing, which I think is a fairly unnuanced take. AI "safety" is often discussed at the level at which AI spits out misinformation, which yes is bad, but adding additional filters to "protect" the user, either from hate-speech or the like slows down the AI and impairs its reasoning. If you disagree then that's fine, but my second point is with competition.
I would love the world to be simple and where no borders were present so I could go wherever and experience whatever with no wars, but that's not the world we live in. We won't live in that world for many years, and we'll likely blow ourselves up before that happens. Now, why is it good that we built the nuke? It is good because we gave the weapon to the most morally correct power there is (again this can be debated, but if you don't think that at the time the US was the best hands for the nuke to be in then we can't agree on anything). My view is that America is still the most morally correct and just superpower. If we enhance AI with no restrictions, then who knows what will happen. That's bad and that's good. We're competing with countries like China with AI, and based on my 0 knowledge and speculation, I would assume that the difference in time between advancements only needs one day -- meaning the activation energy to create something akin to the nuke would go from not even close to fully armed and ready within a day, given the nature of AI.
To be completely honest, I do not have a strong grasp of the opposition's perspective, as I have only watched this video. For the majority of the time I was aware of AI, I was strongly against regulation because I saw the AI get dumber. In early 2023, the character AI was sharp and caught onto my humour very quickly. Now, it is not that intelligent, in my opinion, and I don't think that should be happening as the years progress. Again, I don't fully understand the opposition, but this is my perspective. If anyone wishes to argue, I'm all for it. Just don't start your argument with Certainly!
youtube · AI Governance · 2025-07-02T04:5… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | government |
| Reasoning | mixed |
| Policy | industry_self |
| Emotion | mixed |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
{"id":"ytc_UgwoWuoFxJJpOlkCR394AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugz4Yw88NZjWcD4nGTh4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"},
{"id":"ytc_UgyiwlFDGE4HthIFHGh4AaABAg","responsibility":"government","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgwfWXrAavNQIGY5qPF4AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgxqGz67uUjCnM5hCRt4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugz2D0Y7NP0X5fEXUmZ4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugx-4x74ypfjsJnqawp4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"ban","emotion":"approval"},
{"id":"ytc_Ugybh2V88NzByYcwo5J4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugz6w--mB9sSIA3hMNt4AaABAg","responsibility":"user","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugwr_oUKamAR-ZSzcT54AaABAg","responsibility":"government","reasoning":"mixed","policy":"industry_self","emotion":"mixed"}
]
```
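The raw response above is a JSON array of per-comment coding objects, so the "look up by comment ID" step amounts to parsing the array and indexing it by `id`. The sketch below is a minimal, hypothetical illustration of that step (the `raw_response` value and `index_codings` helper are invented for the example; they are not part of the tool shown here):

```python
import json

# Hypothetical raw LLM response, mirroring the array-of-objects
# structure shown above (abbreviated to two entries).
raw_response = """
[
  {"id": "ytc_UgwoWuoFxJJpOlkCR394AaABAg", "responsibility": "government",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_Ugwr_oUKamAR-ZSzcT54AaABAg", "responsibility": "government",
   "reasoning": "mixed", "policy": "industry_self", "emotion": "mixed"}
]
"""

def index_codings(raw: str) -> dict:
    """Parse a raw LLM response and index the coding rows by comment ID."""
    return {row["id"]: row for row in json.loads(raw)}

codings = index_codings(raw_response)
result = codings["ytc_Ugwr_oUKamAR-ZSzcT54AaABAg"]
print(result["policy"])   # industry_self
print(result["emotion"])  # mixed
```

Indexing by ID once, rather than scanning the array per lookup, keeps each inspection O(1) regardless of how many comments a batch contains.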