Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples (click to inspect):

- "That hasnt been true in 20 years. Tech bros are business majors now and ignore a…" (rdc_ljrhhjv)
- "AI art for silly memes or for fun is fine.... / Using AI art, then claiming you ar…" (ytc_Ugxz6Rzcv…)
- "They seems to forget we all start with stick figures and potato faces. It takes …" (ytc_Ugw51FRmW…)
- "AI is just as likely to make the elites totally pointless. With AI grifters at t…" (ytc_Ugww5oSH7…)
- "So when people use pre-made brushes in photoshop, that's not art either huh / All…" (ytc_Ugxvlu2n0…)
- "Aww. So cute. / Except the fact that the prompts are fake. He specifically told t…" (ytc_UgzQM5XQe…)
- "ChatGPT told me that Sweden wasn’t apart of NATO like two months after we joined…" (ytc_Ugxw5UVEl…)
- "1-Man creates AI / 2-AI perfects AI and enslaves man / 3-Solar flair wipes out elect…" (ytc_UgwZsBUFt…)
Comment
Isn’t the logical extension of your argument that we should not only stop AI development in the US, but that we should also go to war with China and Russia to stop them from AI development?!
You’re handwaving the arms race idea a little too easily. If AGI is as much of an existential threat as you believe it is, stopping our own AI development is no where near enough to combat this potential extinction event. China will not stop. Russia will not stop. They both believe in whatever power a won AI arms race would give.
So then, you agree we should not only stop AI development in the US, but also start World War 3 to stop China and Russia from reaching AGI, right?
youtube · AI Governance · 2025-08-26T17:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | government |
| Reasoning | consequentialist |
| Policy | ban |
| Emotion | fear |
| Coded at | 2026-04-26T19:39:26.816318 |
Raw LLM Response

```json
[
  {"id":"ytc_Ugz3qTS819wIgZshvBl4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgyWwj-vUslVBQdFn354AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_Ugz4anTgjdsGbSssSZJ4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgwaHX4mvwUBpBLGR8J4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgzIpceCfSqOdxPm3mx4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"ban","emotion":"fear"}
]
```