Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
A critically important essay written by Dario. A few quick thoughts on it: the human/gorilla analogy is often cited, but the essay compares AI to human psychology, and perhaps the better analogy is how more intelligent humans treat less intelligent humans, and why. Some relevant thinkers on this are Foucault, Rawls, and Habermas. However, the problem is harder than this, because the analogy no longer works well for vastly more intelligent AI. The institutional checks we have for other humans rest on rough cognitive parity, physical vulnerability, the need for cooperation with other humans, information limits, and mortality. A superintelligent AI has none of these limitations. Its advantages over humans, and therefore the level of risk, are not only intelligence, but the combination of intelligence, agency, and independence from human cooperation. We are already beginning to see increases in all three of these dimensions.
youtube 2026-01-29T06:0… ♥ 13
Coding Result
Dimension       Value
Responsibility  distributed
Reasoning       mixed
Policy          unclear
Emotion         mixed
Coded at        2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id": "ytc_UgwJjVPPxRKLk_EhFuV4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgyoQNZaYdLgmJSUNyN4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"},
  {"id": "ytc_Ugw2PlaLm0IcC3ThBNN4AaABAg", "responsibility": "distributed", "reasoning": "mixed", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgzLR4Kb7lW-vqQ4VFB4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_UgyU7KI1Pz1XtRuQfB94AaABAg", "responsibility": "company", "reasoning": "virtue", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_Ugyoj4NRJNDUbL0GlPV4AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgxnT51F-tccieTZSrJ4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgxAyewDdmXOKWAb8CZ4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgwR2AH_xdmySHHX_nl4AaABAg", "responsibility": "company", "reasoning": "virtue", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_UgzaUD67bAVjkmcXGz14AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"}
]
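A response like the one above can be parsed and summarized programmatically. Below is a minimal sketch, assuming the raw LLM response is a JSON array of objects with the five fields shown (`id`, `responsibility`, `reasoning`, `policy`, `emotion`); the function and variable names are illustrative, not part of any pipeline shown here.

```python
import json
from collections import Counter

# Fields every coded entry is assumed to carry, per the raw response above.
REQUIRED_FIELDS = {"id", "responsibility", "reasoning", "policy", "emotion"}

def parse_coding_response(raw: str) -> list[dict]:
    """Parse a raw LLM coding response and check each entry is complete."""
    codes = json.loads(raw)
    for entry in codes:
        missing = REQUIRED_FIELDS - entry.keys()
        if missing:
            raise ValueError(f"entry {entry.get('id')!r} missing fields: {missing}")
    return codes

def tally(codes: list[dict], dimension: str) -> Counter:
    """Count how often each value appears along one coding dimension."""
    return Counter(entry[dimension] for entry in codes)
```

For example, `tally(codes, "responsibility")` over the ten entries above would show `company` as the most frequent value, which is one quick way to sanity-check a batch of codes against the per-comment table.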