Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
The use of Badiou here is creative but not ultimately convincing. To be fair, Badiou is as vague about how his philosophy applies to science as he is voluminous about how it applies to politics, so good try--this is necessary work, inasmuch as it is necessary to understand Badiou.

Badiou's idea of the "ethics of truth" as faithfulness to a norm-shattering "Event" is similar to Kuhn's concept of a paradigm shift, that is, a conceptual innovation that fundamentally restructures the entire system of concepts, assumptions, and interpretations of data that constitutes "normal science" (e.g., the way Einstein's theory of relativity destroyed the consensus ether-based physics). Badiou's ethics of truth is not about committing oneself to a positive goal or telos (such as the development of AI) but about being faithful to elaborating, enacting, and unraveling the consequences of a discovery or radical conceptual innovation that has already occurred. You don't see the Event coming, but once it does come, you stay true to it--that is the "ethics of truth." So yeah, he does have this kind of badass "never look back" attitude, but it doesn't quite fit the idea that it is "unethical not to develop AI"; it is more like "if we were to develop AI, we'd have an ethical obligation to go with it."

But the thing is, Badiou's conception of human infinitude ("immortality," "exceed[ing] my being") doesn't sit well with the premise that it is possible to develop genuine artificial intelligence with digital computing. (Badiou doesn't even consider most of what we call thought to be thought, just mere animal cognition; "thought" is a label he reserves for the intellectual elaboration of "truth.") And what Badiou's ethics of truth, as it applies to politics, certainly opposes is the mindless accumulation and engineering of technologies of domination, which is all our efforts at making "artificial intelligence" so far amount to.
youtube 2015-02-22T04:0… ♥ 1
Coding Result
Dimension        Value
Responsibility   none
Reasoning        mixed
Policy           none
Emotion          approval
Coded at         2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id": "ytc_Ugg6_c_fnxJFiXgCoAEC", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgjBrm-BO4E1Z3gCoAEC", "responsibility": "developer", "reasoning": "deontological", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_Uggwq5VL_P9YvngCoAEC", "responsibility": "distributed", "reasoning": "mixed", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_Ugi28m3CG46xzHgCoAEC", "responsibility": "government", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UggmA4p100IU0HgCoAEC", "responsibility": "developer", "reasoning": "deontological", "policy": "liability", "emotion": "fear"},
  {"id": "ytc_UgiqEwaXkqSM-ngCoAEC", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugi0PpcKcA8VCXgCoAEC", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgghsB3quoCVXHgCoAEC", "responsibility": "ai_itself", "reasoning": "unclear", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgjzbO8DgHLWlngCoAEC", "responsibility": "company", "reasoning": "virtue", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_UgiclBN6LTRIL3gCoAEC", "responsibility": "user", "reasoning": "consequentialist", "policy": "industry_self", "emotion": "fear"}
]
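The raw LLM response is a JSON array with one record per coded comment, keyed by comment id, with the four coding dimensions shown in the table above. A minimal sketch of how such a batch response could be parsed and a single comment's coding looked up, assuming Python and its standard json module (the two records below are copied from the raw response; the variable names are illustrative, not part of any actual pipeline):

```python
import json

# Two records copied from the raw response above; a real batch would
# contain one record per comment sent to the model.
raw = '''[
  {"id": "ytc_Ugg6_c_fnxJFiXgCoAEC", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgiqEwaXkqSM-ngCoAEC", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "approval"}
]'''

# Index the records by comment id for direct lookup.
records = {rec["id"]: rec for rec in json.loads(raw)}

# Retrieve the coding for one comment and read off its dimensions.
coding = records["ytc_UgiqEwaXkqSM-ngCoAEC"]
print(coding["reasoning"], coding["emotion"])  # mixed approval
```

Note that this record matches the table above (reasoning "mixed", emotion "approval"), which is how the per-comment view pairs a coded comment with its row in the batch response.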