Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
There is a lot of uncertainty, but we have to assign credences based on the best of our knowledge and understanding in order to make tradeoffs. We need to assign credences to: the likelihood that a PauseAI-style pause would happen; whether such a pause would be effective; the likelihood that AI would naturally doom us at current levels of control/motivation; the likelihood that AI may naturally become more moral than us/superfriendly at current levels of control/motivation; the likelihood that humans go extinct in non-AI ways; and the likelihood that humanity without the aid of AI ends up in a dystopia, and vice versa. These credences should then feed into the question of which of these factors has the most impact/traction, which have the lion's share of attention, which are neglected, and which are most likely solvable/doable.
youtube 2025-09-28T09:5…
Coding Result
Dimension       Value
Responsibility  unclear
Reasoning       consequentialist
Policy          unclear
Emotion         fear
Coded at        2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id":"ytc_Ugzoxp8VJkk06Myu70B4AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_Ugz-xwhi8HAkjSr1NGt4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"unclear"},
  {"id":"ytc_UgyjEtUvpCDNyVE7zlF4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugy3GdP0VdQ-BrpkX994AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgynLkqbQPtw1CCwMCF4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"approval"},
  {"id":"ytc_UgxwS-kthOu_-zflWfl4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ugxv4YlcLl2ASYEppU54AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgxHqxjExpOIg_4OWuN4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxpcIdPNA2qBPtylnl4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
  {"id":"ytc_UgwAPjNz8TAzw3hFldd4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}
]
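The raw response above is a JSON array of per-comment codes, keyed by comment id across four dimensions (responsibility, reasoning, policy, emotion). A minimal parsing sketch, assuming the response always arrives as a well-formed JSON array in this shape (the `parse_codes` helper and its defaulting behavior are illustrative, not part of the actual pipeline):

```python
import json

# Two records copied from the raw LLM response above, trimmed for brevity.
raw = '''[
  {"id": "ytc_Ugzoxp8VJkk06Myu70B4AaABAg", "responsibility": "government",
   "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_UgwAPjNz8TAzw3hFldd4AaABAg", "responsibility": "unclear",
   "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"}
]'''

# The four coding dimensions used in the Coding Result table above.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def parse_codes(raw_response: str) -> dict:
    """Map comment id -> coded dimensions, defaulting missing keys to 'unclear'."""
    records = json.loads(raw_response)
    return {
        rec["id"]: {dim: rec.get(dim, "unclear") for dim in DIMENSIONS}
        for rec in records
    }

codes = parse_codes(raw)
print(codes["ytc_UgwAPjNz8TAzw3hFldd4AaABAg"]["emotion"])  # fear
```

Defaulting absent dimensions to "unclear" mirrors how the table above reports dimensions the model could not code.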