Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Seems important to note that Nate (and the CEOs) are hardly, hardly alone in warning about AI existential risk. Something like 80% of AI researchers across the board give it a 5% or greater chance of happening, including 2/3 of the godfathers of AI Geoff Hinton and Yoshua Bengio, respectively a Nobel laureate and most-cited scientist of all time
Source: YouTube · AI Moral Status · 2025-10-30T20:4… · ♥ 1
Coding Result
Dimension       Value
Responsibility  none
Reasoning       consequentialist
Policy          unclear
Emotion         fear
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id":"ytc_Ugxi_WQDxjBUM3DxMXV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgyubHlk3SYTc5ECco14AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"sadness"},
  {"id":"ytc_UgwIP0X6C2Uh3Db8qat4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgwXGNnOa3vEPzKtm814AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"disapproval"},
  {"id":"ytc_UgzxM-ZpKZHHmQchi7d4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgxdZC77W8Sk51DN1hl4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
  {"id":"ytc_Ugz0EPssorPnG-CUiWx4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgyL_J9maIR2t9Q5PMp4AaABAg","responsibility":"expert","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_UgzxZWeZ9v_2i70bTUh4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgzP2ObOZA0ZLAXZoTB4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]
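To inspect the model output for one coded comment, the raw response can be parsed and filtered by comment id. This is a minimal sketch, assuming the response is a JSON array of per-comment objects as shown above; the function name `code_for` is hypothetical, and the embedded response is truncated to two of the ten records for brevity.

```python
import json

# Abbreviated copy of the raw LLM response above (two of ten records).
raw_response = '''
[
  {"id": "ytc_UgwIP0X6C2Uh3Db8qat4AaABAg",
   "responsibility": "none", "reasoning": "consequentialist",
   "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgxdZC77W8Sk51DN1hl4AaABAg",
   "responsibility": "distributed", "reasoning": "consequentialist",
   "policy": "ban", "emotion": "outrage"}
]
'''

def code_for(comment_id, response_text):
    """Parse the raw model output and return the coding dict for one comment,
    or None if that comment id is absent from the response."""
    codes = json.loads(response_text)
    return next((c for c in codes if c["id"] == comment_id), None)

result = code_for("ytc_UgwIP0X6C2Uh3Db8qat4AaABAg", raw_response)
print(result["emotion"])  # fear
```

Returning None for a missing id makes it easy to flag comments the model skipped when a batch of comments is coded in one call.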