Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
A bit disappointed in the arguments; not sure if it's worth watching all of it. I admire all the people there and have followed their work for a long time. But the TL;DR summary is:

Pro: There's a non-zero (non-tolerable) risk that superhuman AI will cause human extinction at some point in the future, and we should think about how to prevent it. Not a terrible argument, but it's hard to argue with: no one really wants to come out and disagree with this statement, since denying that there is a non-zero risk would seem ridiculous.

Con: Maybe there is a risk, but it's not so bad. AI systems improve iteratively, and at every iteration they are safety-tested to some degree. We won't spontaneously create superhuman AI; we will know when we get there.

My view: In the end, both sides agree there are risks, but the Con side doesn't want to halt research and the Pro side does. I personally believe that pausing research is simply not possible, especially with all the open-source work going on, so it really isn't a good proposal. Instead, the Pro side should come up with more practical regulatory recommendations that make the iterative improvement of AI safer and maybe slow it down by forcing rigorous safety tests before deployment. But again, with all the open-source development happening right now, this is really hard to enforce. Not an easy topic, but this debate doesn't provide rigorous arguments for either side, which is slightly disappointing imho.
youtube AI Governance 2023-06-26T09:2…
Coding Result
Dimension        Value
Responsibility   none
Reasoning        mixed
Policy           none
Emotion          resignation
Coded at         2026-04-27T06:24:59.937377
Raw LLM Response
[ {"id":"ytc_Ugz-xaGPm3D8c0ixwBJ4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"}, {"id":"ytc_UgwpCXSpz_jjcNwXPVZ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"}, {"id":"ytc_UgyWT2UpaskQUMAayqZ4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"outrage"}, {"id":"ytc_UgyJMOcfjVCpVbEKq7p4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}, {"id":"ytc_Ugw038Sm5-hO9QbDRQt4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}, {"id":"ytc_Ugz-SmocC08gAzk5kgp4AaABAg","responsibility":"user","reasoning":"deontological","policy":"liability","emotion":"fear"}, {"id":"ytc_UgzA8QT364rRklCbe8h4AaABAg","responsibility":"distributed","reasoning":"contractualist","policy":"regulate","emotion":"mixed"}, {"id":"ytc_Ugx9jAOzKSBkQ2GH8K54AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"}, {"id":"ytc_Ugwt7LuF1KC8pyqZBbN4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"}, {"id":"ytc_UgxF1j0N3Xrp1OOO34N4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"resignation"} ]