Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- Made this robot the for kill citizens in the future 🤔I don’t believe in governme… — `ytc_Ugx7G1IOk…`
- What I find ironic is when Alex had on a guy that believes we should be wary of … — `ytc_Ugwqx-TXW…`
- Pfft… invest in Nvda and AMD. You’ll be worth enough to build an underground bun… — `ytc_Ugxzx8AV_…`
- You still beleive ai tools like chatgpt will be free forever ? In a few years th… — `ytr_UgwwsZ0qF…`
- Sooo how can we tell that big tech building all these data centers isn't already… — `ytc_UgxWGeKSH…`
- i think if we creat an ai that we integrate in us we would creat some far better… — `ytc_UgwERgcFc…`
- AI BS. Amazon didn’t just fire and replace that many people with robots. A simpl… — `ytc_UgyMGemOE…`
- Who owns the finished AI art? The person who described and requested the art pie… — `ytc_UgxumAuiE…`
Comment
Once again, a discussion of AI Safety and how we could stop AI development that completely ignores the military imperatives. Even if you could get a globally approved freeze on AI development, there is absolutely no way that either the US Military or the Chinese Military would stop developing AI, as it is now a matter of survival for them to be leading that race. And no, the USA dominating the World is not something anyone wants, other than Americans. It would be a disaster for the human race. Imagine having an orange psychopath with absolute power over the planet. No thanks.
youtube
AI Governance
2025-12-04T13:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | government |
| Reasoning | consequentialist |
| Policy | liability |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[{"id":"ytc_Ugx-aKGf0p_hQuyTf2d4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"approval"},
{"id":"ytc_UgxFQKAPKOWDySBnKWt4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgwWc39WtXyHMVG4LQp4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugy5Q3Lq3kuLnHEU_FF4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugw9l5dOh12lHIs1VyZ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"unclear","emotion":"resignation"},
{"id":"ytc_UgxOusnwRX4E_TVRt7B4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgxH7sOhjr5gdxAWi6x4AaABAg","responsibility":"user","reasoning":"virtue","policy":"unclear","emotion":"resignation"},
{"id":"ytc_Ugw1ba_YmlEEUCc05Bd4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_UgzUiGiMmCWqK-W9MnJ4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"approval"},
{"id":"ytc_Ugz9RRUs9Alm9ExipbV4AaABAg","responsibility":"company","reasoning":"virtue","policy":"regulate","emotion":"resignation"}]
```
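The "look up by comment ID" workflow can be sketched in a few lines of Python: parse the raw LLM response as JSON and index the coded records by their `id`. The field names (`id`, `responsibility`, `reasoning`, `policy`, `emotion`) come from the raw response shown above; the parsing code itself is an illustrative assumption, not the tool's actual implementation.

```python
import json

# Raw LLM response, truncated here to two records for brevity
# (the real response is a JSON array of one object per comment).
raw = (
    '[{"id":"ytc_Ugx-aKGf0p_hQuyTf2d4AaABAg","responsibility":"ai_itself",'
    '"reasoning":"unclear","policy":"unclear","emotion":"approval"},'
    '{"id":"ytc_Ugw1ba_YmlEEUCc05Bd4AaABAg","responsibility":"government",'
    '"reasoning":"consequentialist","policy":"liability","emotion":"fear"}]'
)

# Index the coded records by comment ID for constant-time lookup.
records = {r["id"]: r for r in json.loads(raw)}

# Look up the coding result for one comment by its ID.
coding = records["ytc_Ugw1ba_YmlEEUCc05Bd4AaABAg"]
print(coding["responsibility"], coding["emotion"])  # government fear
```

The same index makes it easy to cross-check a displayed coding table (like the one above) against the exact model output it was parsed from.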