Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
The trolly car problem. No decision would be made by robot as this would interfe…
ytc_UgwMUIXYp…
I presented the freeze hypo to ~10 LLMs… All said they would not harm human, exc…
ytc_UgziKsYoY…
Personally, I think that people should approach AI tools differently than they d…
ytc_UgxPDqMxp…
"all AI"
"big plug"
"kick out of the wall"
You're an infant trying to defeat a u…
ytr_UgxQmcMaP…
It’s very tough, but I can see clip number two happening in real life, but I jus…
ytc_UgxHSLlHX…
I think AI art is great to come up with concepts but where will this go? Books? …
ytc_UgxnMMlkh…
It's like colouring books. When I colour in a colouring page, I'm being super cr…
ytr_UgxDR7iN2…
AI = Destroyer of human being employment..once 70%human being lost their job.. …
ytc_Ugz3Kd-MC…
Comment
A critically important essay written by Dario. A few quick thoughts on it:
The human/gorilla analogy is often cited, but since the essay compares AI to human psychology, perhaps the better analogy is how more intelligent humans treat less intelligent humans, and why. Some relevant thinkers on this are Foucault, Rawls, and Habermas.
However, the problem is harder than this, because the analogy no longer works well for vastly more intelligent AI. The institutional checks we have for other humans rest on rough cognitive parity, physical vulnerability, the need for cooperation with other humans, information limits, and mortality.
A superintelligent AI has none of these limitations. Its advantage over humans, and therefore the level of risk, lies not only in intelligence but in the combination of intelligence, agency, and independence from human cooperation. We are already beginning to see increases in all three of these dimensions.
youtube
2026-01-29T06:0…
♥ 13
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | mixed |
| Policy | unclear |
| Emotion | mixed |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
  {"id": "ytc_UgwJjVPPxRKLk_EhFuV4AaABAg", "responsibility": "developer",   "reasoning": "deontological",    "policy": "regulate",  "emotion": "fear"},
  {"id": "ytc_UgyoQNZaYdLgmJSUNyN4AaABAg", "responsibility": "company",     "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"},
  {"id": "ytc_Ugw2PlaLm0IcC3ThBNN4AaABAg", "responsibility": "distributed", "reasoning": "mixed",            "policy": "unclear",   "emotion": "mixed"},
  {"id": "ytc_UgzLR4Kb7lW-vqQ4VFB4AaABAg", "responsibility": "company",     "reasoning": "deontological",    "policy": "regulate",  "emotion": "outrage"},
  {"id": "ytc_UgyU7KI1Pz1XtRuQfB94AaABAg", "responsibility": "company",     "reasoning": "virtue",           "policy": "none",      "emotion": "outrage"},
  {"id": "ytc_Ugyoj4NRJNDUbL0GlPV4AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "unclear",   "emotion": "fear"},
  {"id": "ytc_UgxnT51F-tccieTZSrJ4AaABAg", "responsibility": "none",        "reasoning": "consequentialist", "policy": "none",      "emotion": "approval"},
  {"id": "ytc_UgxAyewDdmXOKWAb8CZ4AaABAg", "responsibility": "company",     "reasoning": "consequentialist", "policy": "none",      "emotion": "indifference"},
  {"id": "ytc_UgwR2AH_xdmySHHX_nl4AaABAg", "responsibility": "company",     "reasoning": "virtue",           "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_UgzaUD67bAVjkmcXGz14AaABAg", "responsibility": "ai_itself",   "reasoning": "consequentialist", "policy": "regulate",  "emotion": "fear"}
]
```
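The lookup-by-comment-ID view above can be sketched in a few lines: parse the raw LLM response, drop any record missing a required dimension, and index the rest by comment ID. This is a minimal illustration, not the tool's actual implementation; `index_by_id` and `DIMENSIONS` are hypothetical names, and the excerpt reuses two records from the raw response shown here.

```python
import json

# Excerpt of the raw LLM response above (two of the ten records).
raw = '''[
  {"id": "ytc_UgwJjVPPxRKLk_EhFuV4AaABAg", "responsibility": "developer",
   "reasoning": "deontological", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgyoQNZaYdLgmJSUNyN4AaABAg", "responsibility": "company",
   "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"}
]'''

# The four coding dimensions shown in the result table above.
DIMENSIONS = {"responsibility", "reasoning", "policy", "emotion"}

def index_by_id(raw_json: str) -> dict:
    """Parse a raw coding response and index records by comment ID,
    skipping malformed records that lack an ID or a dimension."""
    indexed = {}
    for rec in json.loads(raw_json):
        if "id" in rec and DIMENSIONS <= rec.keys():
            indexed[rec["id"]] = {k: rec[k] for k in DIMENSIONS}
    return indexed

codes = index_by_id(raw)
print(codes["ytc_UgwJjVPPxRKLk_EhFuV4AaABAg"]["policy"])  # regulate
```

Keying on the comment ID makes joining the coded dimensions back to the sampled comments a dictionary lookup rather than a scan of the response list.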