Raw LLM Responses
Inspect the exact model output for any coded comment, or look up a record by comment ID.
Random samples
- "Faith in Jesus Christ is the only hope we have in this life with AI, without AI,…" — ytc_Ugz8yE828…
- "Society is rapidly heading toward a point where this won't matter anymore. Just …" — rdc_k7l468w
- "I truly hope that when Ai surpasses us it recognizes that humans have so much po…" — ytc_UgzFm3Gvf…
- "Funny hes talking about ai taking over our power grids etc and weve only just ha…" — ytc_UgzhXf3p6…
- "Bull Shit! 99%… what is A.I. doing then? Not making goods and services… no one c…" — ytc_Ugyl5Q8L5…
- "The biggest mistake was not demanding to see a warrant.. suspicion is not probab…" — ytc_Ugzx8yLzL…
- "Manufacturers need consumers. Boycott ANY products from companies who fire human…" — ytc_Ugz_S9hwy…
- "not sure I'd be hanging around with a robot holding a machine gun , made from th…" — ytc_UgytJH7UQ…
Comment
Worth noting that Russell’s center (CHAI at Berkeley) is funded by Open Philanthropy — that’s Dustin Moskovitz’s money (Facebook co-founder). Same network that funds Future of Life Institute. These guys have a direct line to DC, Moskovitz literally met with Biden about AI regulation. So when Russell pushes for government oversight, he’s not some neutral academic — he’s part of an ecosystem that benefits from that outcome. More regulation = more need for “experts” to advise government = more grants and influence for people like him.
Also find it ironic that if you actually believe AI is an extinction-level risk, centralizing control through government is the worst possible architecture. You’d want redundancy, multiple competing approaches, decentralized development — fault tolerance. Not a single regulatory body that becomes a point of failure. The market with many players experimenting with safety is more robust than betting everything on bureaucrats getting it right.
He wrote the textbook, trained half these researchers, watched them get rich at OpenAI and DeepMind while he stayed in academia. Now he’s the guy warning how dangerous their work is. Maybe he’s right about the risks. But his solution conveniently puts people like him in charge.
| Field | Value |
|---|---|
| Source | youtube |
| Topic | AI Governance |
| Timestamp | 2025-12-07T22:0… |
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | virtue |
| Policy | regulate |
| Emotion | mixed |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_UgwvTSRD5trZevfIYnB4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgzQutbG2tNNGIK3sHF4AaABAg","responsibility":"company","reasoning":"virtue","policy":"unclear","emotion":"outrage"},
{"id":"ytc_Ugz59csBDz9waO9xi4l4AaABAg","responsibility":"government","reasoning":"unclear","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugy1awct-k9UerWIvdd4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgzMGKwHmKTWLYY2cZt4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_UgxhEd0gncx35A3RDw54AaABAg","responsibility":"company","reasoning":"virtue","policy":"regulate","emotion":"mixed"},
{"id":"ytc_UgwNpigjrWr-iFxl4wJ4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgzIoICz2ds5OYh4MKp4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgxuRG2TM7bgEB2_wu14AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgwyTZZrvhLxnWRxq7B4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"}
]
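The raw response above is a JSON array of per-comment coding records, one object per comment ID, each carrying the four dimensions shown in the Coding Result table. A minimal sketch of how such a response could be parsed and indexed for the "look up by comment ID" view — the helper name `index_by_id` and the shortened two-record sample are illustrative, not part of the actual pipeline:

```python
import json

# Illustrative two-record excerpt of a raw LLM coding response
# (real IDs taken from the response above; the full array has ten records).
RAW_RESPONSE = """[
 {"id":"ytc_UgwvTSRD5trZevfIYnB4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
 {"id":"ytc_Ugy1awct-k9UerWIvdd4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]"""

# The four coding dimensions shown in the Coding Result table.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def index_by_id(raw: str) -> dict:
    """Parse the JSON array and key each coding record by its comment ID."""
    records = json.loads(raw)
    return {r["id"]: {d: r[d] for d in DIMENSIONS} for r in records}

codes = index_by_id(RAW_RESPONSE)
print(codes["ytc_Ugy1awct-k9UerWIvdd4AaABAg"]["policy"])  # regulate
```

Keying on the comment ID makes the inspector lookup a single dictionary access; records missing an expected dimension would raise a `KeyError` here, which doubles as a cheap validity check on the model output.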