Raw LLM Responses

Inspect the exact model output for any coded comment. Look up a comment by its ID, or click one of the random samples below to inspect it. A minimal lookup sketch follows the sample list.

Random samples (click to inspect):
- "Does/would AI care about the 'Human/Carbon' realm? How does AI represent itself?…" (ytc_UgwJTqERr…)
- "flaw with niel's argument about continuously out innovating ai. Well here's the …" (ytc_UgzE35ajw…)
- "This is bs. Their will be new jobs. The same way alot of jobs have been automate…" (ytc_UgyjeEdXi…)
- "What an amazing interview. I bought Karen Hao’s book after seeing this, and that…" (ytc_UgyO8c47c…)
- "I love that america decided to use their population as test subjects. AI CEO ad…" (ytc_UgwhBSlPx…)
- "What management “thinks” doesnt mean much; what matters is if there is enough pe…" (rdc_kyz7vbe)
- "us humans will always be better than ai, companies are just pushing "it's the fu…" (ytc_UgxuysAMH…)
- "Yes because that a good reason to stop AI. It’s like saying that we should have …" (ytc_UgyUzajb_…)
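A minimal sketch of the ID lookup, assuming the coded comments are available as a JSON Lines export (the file name `coded_comments.jsonl` and the record fields are assumptions, not the page's actual storage):

```python
import json

def load_coded_comments(path: str) -> dict[str, dict]:
    """Index coded records by comment ID (e.g. 'ytc_...' for YouTube, 'rdc_...' for Reddit)."""
    index = {}
    with open(path, encoding="utf-8") as f:
        for line in f:
            record = json.loads(line)
            index[record["id"]] = record
    return index

# Hypothetical usage; substitute a full comment ID from the samples above.
coded = load_coded_comments("coded_comments.jsonl")
print(coded.get("ytc_<comment_id>"))  # None if the ID was never coded
```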
Comment
The guy in the white shirt is spot on. AI is like biology, which is the result of chemistry, which is the result of physics. So, why a few simple quarks and 4 laws, physics builds protons, neutrons and electrons. With those three, and the laws, chemistry builds all the complexity of matter in the universe, and biology is just a complex collection of chemistry. SO, you don't have to teach an AI to build nuclear weapons, you just have to give it a few logical starting blocks and tell it to find the optimal solution to win, and it will inevitably build nuclear weapons, just like we did.
youtube · AI Governance · 2026-03-27T12:5… · ♥ 2
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | mixed |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-27T06:24:53.388235 |
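Each coded comment reduces to one record: the four dimensions plus a timestamp. A minimal sketch of that record as a Python dataclass (the class and field names are illustrative; the values listed in the comments are those observed in the raw response below, not necessarily the full codebook):

```python
from dataclasses import dataclass

@dataclass
class CodingResult:
    comment_id: str      # e.g. "ytc_..." or "rdc_..."
    responsibility: str  # observed: company, government, distributed, ai_itself, none, unclear
    reasoning: str       # observed: deontological, consequentialist, mixed, unclear
    policy: str          # observed: regulate, ban, liability, industry_self, none, unclear
    emotion: str         # observed: outrage, fear, approval, resignation, indifference, mixed
    coded_at: str        # ISO 8601 timestamp of the coding run
```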
Raw LLM Response

The model codes comments in batches; the raw response is a JSON array with one record per comment ID (this batch covers ten comments):
```json
[
{"id":"ytc_Ugx4inJEc7QbwZWMEkV4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugy6-TqnTkgFBQCQA914AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxYywa2HJpFLc4UQkt4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
{"id":"ytc_UgyFcwT72IdF4PnMQfx4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgwAeorGgJ67NXPviat4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgxoKajllodWwHLZekp4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"industry_self","emotion":"resignation"},
{"id":"ytc_Ugz3_ko7IldZSdR6eqF4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugy_iMPJLiyWm28hHOh4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_UgyNAf5oaNdBufYuev94AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgyiiarIxlRWm_bGixV4AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"fear"}
]
```
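A minimal sketch of parsing and sanity-checking such a batch. The allowed value sets are inferred from the samples above; a real pipeline would load the codebook's actual vocabularies instead:

```python
import json

# Allowed values per dimension, inferred from the batch shown above;
# the actual codebook may define more categories.
ALLOWED = {
    "responsibility": {"company", "government", "distributed", "ai_itself", "none", "unclear"},
    "reasoning": {"deontological", "consequentialist", "mixed", "unclear"},
    "policy": {"regulate", "ban", "liability", "industry_self", "none", "unclear"},
    "emotion": {"outrage", "fear", "approval", "resignation", "indifference", "mixed"},
}

def validate_batch(raw: str) -> list[dict]:
    """Parse a raw batch response and reject records with unknown values."""
    records = json.loads(raw)
    for rec in records:
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{rec.get('id')}: unexpected {dim} value {rec.get(dim)!r}")
    return records
```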