Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I'm suggesting this because Yudkowsky will be able to go more into why it is so difficult to "align" AI so that it does not do us harm. A lot of people are not aware of the fact that AIs are "grown" rather than programmed (which Hinton briefly talked about here), and that our understanding of what goes on inside the models is very limited, to say the least. Largely all of the investment goes into developing the capabilities of the models, and not into safety/interpretability research. This, combined with the fact that we have to get it right on the first try or we will lose control, is why so many AI researchers are so sure humanity will not endure. And Yudkowsky is also very good at explaining why it is likely that humanity would not survive an ASI. So... please try to get him on!
youtube AI Moral Status 2026-03-01T17:4…
Coding Result
Dimension        Value
Responsibility   developer
Reasoning        consequentialist
Policy           regulate
Emotion          fear
Coded at         2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id": "ytr_Ugyrxo3Yl8kUbsYG4Bt4AaABAg.ATpcStJUQzgATtKkcXJFNU", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytr_UgzTsdWzjpP1aIkYqnl4AaABAg.ATpPKF2Z6dYATq0FuVZSuw", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytr_UgykV9A0sK9ItLX3iGd4AaABAg.ATpLtrEnsnEATrldH11JXN", "responsibility": "developer", "reasoning": "deontological", "policy": "unclear", "emotion": "outrage"},
  {"id": "ytr_Ugz18dD3F-IXAIaQNXl4AaABAg.ATpF3_f33hSATpFbOTh424", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytr_UgyXYzPUZozLxG97TXR4AaABAg.ATp9_U3k_DIATpBGLUfglO", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "resignation"},
  {"id": "ytr_UgxMRYwqruPfwxkWczV4AaABAg.ATouIY48NNDATov3sqEOma", "responsibility": "user", "reasoning": "consequentialist", "policy": "ban", "emotion": "fear"},
  {"id": "ytr_Ugwm2sqg3pA5woKI2Rl4AaABAg.ATotqWnATYaAToxmt9KJAA", "responsibility": "developer", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytr_UgxUCL6HuoPofkPOh4R4AaABAg.AToiWPlFQB9ATpVoIsE9hh", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "outrage"},
  {"id": "ytr_UgxUCL6HuoPofkPOh4R4AaABAg.AToiWPlFQB9ATpiVdYpND1", "responsibility": "none", "reasoning": "deontological", "policy": "none", "emotion": "outrage"},
  {"id": "ytr_UgyuWWL3XCh95xEgKmJ4AaABAg.AToh-HrL8HAATpuR7NRn41", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"}
]
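A raw response like the one above has to be parsed and checked before the codes are stored, since the model can return values outside the codebook. The sketch below shows one minimal way to do that in Python; the set of allowed values per dimension is inferred only from the responses shown on this page (the actual codebook may define more categories), and the record id used in the example is hypothetical.

```python
import json

# Allowed values per coding dimension, inferred from the responses above.
# ASSUMPTION: the real codebook may include additional categories.
ALLOWED = {
    "responsibility": {"developer", "user", "ai_itself", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "unclear"},
    "policy": {"regulate", "ban", "none", "unclear"},
    "emotion": {"fear", "outrage", "resignation", "indifference"},
}

def parse_coding_response(raw: str) -> list:
    """Parse a raw LLM coding response; keep only records whose codes
    fall inside the allowed values for every dimension."""
    records = json.loads(raw)
    return [
        rec for rec in records
        if all(rec.get(dim) in vals for dim, vals in ALLOWED.items())
    ]

# Hypothetical id, for illustration only.
raw = ('[{"id":"ytr_example123","responsibility":"developer",'
       '"reasoning":"consequentialist","policy":"regulate","emotion":"fear"}]')
print(parse_coding_response(raw))  # the single valid record survives
```

Records that fail validation are dropped rather than repaired here; a production pipeline might instead flag them for re-coding.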