Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Arguably, Yudkowsky isn't even a true doomer either. He still argues that AI alignment is possible, and is generally quite optimistic about it, including the idea that if we get genetically engineered "super Von Neumanns" then they would be able to solve the problem. A true doomer would be someone like Roman Yampolskiy, who argues that alignment is fundamentally impossible to solve.
YouTube · AI Governance · 2025-12-17T15:3…
Coding Result
Dimension        Value
Responsibility   none
Reasoning        mixed
Policy           none
Emotion          indifference
Coded at         2026-04-26T23:09:12.988011
Raw LLM Response
[ {"id":"ytc_UgzuWR6hx3CLOgZSRHJ4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"}, {"id":"ytc_UgzzyvIR6rXwNVQnA2t4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"}, {"id":"ytc_Ugxiios944ZuN3g8eAF4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"indifference"}, {"id":"ytc_UgxzCi0TxLKPytjDpdt4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"outrage"}, {"id":"ytc_UgwxmeK49D5Ozq6FvWh4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"mixed"}, {"id":"ytc_UgyKRCq3umJT5r9sECF4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}, {"id":"ytc_Ugy51DyZeLyuYHFB4Kl4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}, {"id":"ytc_UgzfTOKX_9T64lZSiOh4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"}, {"id":"ytc_Ugy0ZfWcS6ZVrAODROV4AaABAg","responsibility":"government","reasoning":"mixed","policy":"none","emotion":"resignation"}, {"id":"ytc_Ugzdx3YKn8WEt0s-d2t4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"fear"} ]