Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Worth noting that Russell’s center (CHAI at Berkeley) is funded by Open Philanthropy — that’s Dustin Moskovitz’s money (Facebook co-founder). Same network that funds Future of Life Institute. These guys have a direct line to DC, Moskovitz literally met with Biden about AI regulation. So when Russell pushes for government oversight, he’s not some neutral academic — he’s part of an ecosystem that benefits from that outcome. More regulation = more need for “experts” to advise government = more grants and influence for people like him. Also find it ironic that if you actually believe AI is an extinction-level risk, centralizing control through government is the worst possible architecture. You’d want redundancy, multiple competing approaches, decentralized development — fault tolerance. Not a single regulatory body that becomes a point of failure. The market with many players experimenting with safety is more robust than betting everything on bureaucrats getting it right. He wrote the textbook, trained half these researchers, watched them get rich at OpenAI and DeepMind while he stayed in academia. Now he’s the guy warning how dangerous their work is. Maybe he’s right about the risks. But his solution conveniently puts people like him in charge.
youtube AI Governance 2025-12-07T22:0…
Coding Result
Dimension       Value
Responsibility  company
Reasoning       virtue
Policy          regulate
Emotion         mixed
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_UgwvTSRD5trZevfIYnB4AaABAg", "responsibility": "unclear",    "reasoning": "unclear",          "policy": "unclear",  "emotion": "mixed"},
  {"id": "ytc_UgzQutbG2tNNGIK3sHF4AaABAg", "responsibility": "company",    "reasoning": "virtue",           "policy": "unclear",  "emotion": "outrage"},
  {"id": "ytc_Ugz59csBDz9waO9xi4l4AaABAg", "responsibility": "government", "reasoning": "unclear",          "policy": "none",     "emotion": "resignation"},
  {"id": "ytc_Ugy1awct-k9UerWIvdd4AaABAg", "responsibility": "company",    "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgzMGKwHmKTWLYY2cZt4AaABAg", "responsibility": "ai_itself",  "reasoning": "consequentialist", "policy": "ban",      "emotion": "fear"},
  {"id": "ytc_UgxhEd0gncx35A3RDw54AaABAg", "responsibility": "company",    "reasoning": "virtue",           "policy": "regulate", "emotion": "mixed"},
  {"id": "ytc_UgwNpigjrWr-iFxl4wJ4AaABAg", "responsibility": "unclear",    "reasoning": "mixed",            "policy": "unclear",  "emotion": "outrage"},
  {"id": "ytc_UgzIoICz2ds5OYh4MKp4AaABAg", "responsibility": "company",    "reasoning": "consequentialist", "policy": "unclear",  "emotion": "fear"},
  {"id": "ytc_UgxuRG2TM7bgEB2_wu14AaABAg", "responsibility": "developer",  "reasoning": "consequentialist", "policy": "unclear",  "emotion": "mixed"},
  {"id": "ytc_UgwyTZZrvhLxnWRxq7B4AaABAg", "responsibility": "ai_itself",  "reasoning": "consequentialist", "policy": "none",     "emotion": "resignation"}
]
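To inspect the model output for one specific comment, the raw response can be parsed as a JSON array and filtered by comment id. A minimal sketch, assuming the raw output is exactly a JSON array of objects with `id` and the four dimension fields as shown above (the `lookup` helper is hypothetical, not part of the tool):

```python
import json

# A trimmed sample of the raw model output above (one coding record).
raw = '''[
  {"id": "ytc_UgxhEd0gncx35A3RDw54AaABAg",
   "responsibility": "company", "reasoning": "virtue",
   "policy": "regulate", "emotion": "mixed"}
]'''

def lookup(raw_response, comment_id):
    """Return the coding record for one comment id, or None if absent."""
    for record in json.loads(raw_response):
        if record.get("id") == comment_id:
            return record
    return None

coding = lookup(raw, "ytc_UgxhEd0gncx35A3RDw54AaABAg")
print(coding["policy"])  # → regulate
```

In practice a real response may carry extra text around the JSON array, so production code would want to strip or validate the payload before calling `json.loads`.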