Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
A bit disappointed in the arguments; not sure if it's worth watching all of it. I admire all the people there and have followed their work for a long time. But the TL;DR summary is:
Pro:
There's a non-zero (non-tolerable) risk that superhuman AI will cause human extinction at some point in the future, so we should think about how to prevent it. Not a terrible argument, but it's hard to argue with: no one really wants to come out and disagree with this statement, since claiming the risk is zero would seem ridiculous.
Con:
Maybe there is a risk, but it's not so bad. AI systems get better iteratively, and at every iteration they are safety tested to some degree. We won't spontaneously create superhuman AI; we will know when we get there.
My view:
In the end both sides agree there are risks, but the Con side doesn't want to halt research and the Pro side does. I personally believe that pausing research is simply not possible, especially with all the open-source work going on, so it really isn't a good proposal. Instead the Pro side should come up with more practical regulatory recommendations that make the iterative improvement of AI safer and maybe slow it down by forcing rigorous safety tests before deployment. But again, with all the open-source activity right now, this is really hard to enforce. Not an easy topic, but this debate doesn't provide rigorous arguments for either side, which is slightly disappointing imho.
Source: youtube | Topic: AI Governance | Posted: 2023-06-26T09:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | mixed |
| Policy | none |
| Emotion | resignation |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
[
{"id":"ytc_Ugz-xaGPm3D8c0ixwBJ4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
{"id":"ytc_UgwpCXSpz_jjcNwXPVZ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_UgyWT2UpaskQUMAayqZ4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"outrage"},
{"id":"ytc_UgyJMOcfjVCpVbEKq7p4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugw038Sm5-hO9QbDRQt4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugz-SmocC08gAzk5kgp4AaABAg","responsibility":"user","reasoning":"deontological","policy":"liability","emotion":"fear"},
{"id":"ytc_UgzA8QT364rRklCbe8h4AaABAg","responsibility":"distributed","reasoning":"contractualist","policy":"regulate","emotion":"mixed"},
{"id":"ytc_Ugx9jAOzKSBkQ2GH8K54AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugwt7LuF1KC8pyqZBbN4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgxF1j0N3Xrp1OOO34N4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"resignation"}
]
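The batch above is a JSON array with one object per coded comment, keyed by comment ID. A minimal sketch of how such a response could be parsed into a lookup table, validating each dimension against the code values observed in this batch (the full codebook may define additional values; the function name and value sets here are assumptions for illustration):

```python
import json

# Code values observed in this batch; the actual codebook may be larger (assumption).
OBSERVED_CODES = {
    "responsibility": {"none", "ai_itself", "developer", "government", "user", "distributed"},
    "reasoning": {"mixed", "consequentialist", "virtue", "unclear", "deontological", "contractualist"},
    "policy": {"none", "ban", "regulate", "liability"},
    "emotion": {"approval", "fear", "outrage", "indifference", "mixed", "resignation"},
}

def parse_batch(raw: str) -> dict:
    """Parse a raw LLM batch response into a dict keyed by comment ID,
    raising if any value falls outside the observed code sets."""
    coded = {}
    for entry in json.loads(raw):
        cid = entry["id"]
        for dim, allowed in OBSERVED_CODES.items():
            if entry[dim] not in allowed:
                raise ValueError(f"{cid}: unexpected {dim} value {entry[dim]!r}")
        # Keep only the four coded dimensions, dropping the ID from the row.
        coded[cid] = {dim: entry[dim] for dim in OBSERVED_CODES}
    return coded
```

Looking up an ID in the returned dict reproduces the rows shown in the Coding Result table, which is presumably how the "look up by comment ID" view is populated.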