Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
That would be really wonderful! And that definitely won't happen by default. You should look up the orthogonality thesis and instrumental convergence to learn more. Orthogonality means there is no such thing as a stupid end goal. Just stupid ways to get there. Any goal is compatible with any level of intelligence. Instrumental convergence means that no matter what your goal is, there are specific subgoals that are logically implied, including self-preservation and power-seeking. Both of these concepts were theorized by AI safety researchers, and later empirically validated in current AI systems. They are properties of goals, not properties of the specific AI architecture. I think it's possible in principle to align a superintelligent AI with the collective good of humanity, but no one on earth has any idea how to do that, and by default we just get a powerful machine that wants something weird that is bad for humans if carried out to the extreme. If you're interested in this topic, I highly recommend looking into AI Safety Info for more information. The actual scientific research on this topic is more valuable than a YouTube comment section back and forth.
Source: youtube · AI Responsibility · 2025-05-21T21:5…
Coding Result
Dimension       Value
Responsibility  ai_itself
Reasoning       consequentialist
Policy          unclear
Emotion         approval

Coded at: 2026-04-27T06:24:59.937377
Raw LLM Response
[ {"id":"ytr_UgzNj7LoawaE790nan54AaABAg.AIOny8dV3GbAIP5toPAVf-","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"}, {"id":"ytr_UgzNj7LoawaE790nan54AaABAg.AIOny8dV3GbAIPM8kuv1Ud","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"fear"}, {"id":"ytr_UgwYyW2bpzuFuFpRbl94AaABAg.AIOnhJi3q4dAIP6ei-HXhJ","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"approval"}, {"id":"ytr_Ugy_nk2EiHLvLd4sPht4AaABAg.AIOjp_O-TDKAIOqV1-x02s","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"}, {"id":"ytr_Ugx6ly5qkRd63SuRPdJ4AaABAg.AIOhAbV7pF1AIP5YgJCQxI","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"}, {"id":"ytr_Ugx6ly5qkRd63SuRPdJ4AaABAg.AIOhAbV7pF1AIRDZKS0Q5M","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"fear"}, {"id":"ytr_UgwRrnE4r2E48ZpjcoB4AaABAg.AIOgVOCxZIvAIOxEYy0pzW","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"resignation"}, {"id":"ytr_UgwRrnE4r2E48ZpjcoB4AaABAg.AIOgVOCxZIvAIP95V2knev","responsibility":"government","reasoning":"virtue","policy":"regulate","emotion":"outrage"}, {"id":"ytr_UgwRrnE4r2E48ZpjcoB4AaABAg.AIOgVOCxZIvAIPaJGgVDf9","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"fear"}, {"id":"ytr_UgyKckZe8u1grR1nO1l4AaABAg.AIOcUTaOsoQAIOr4b-GZjS","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"resignation"} ]