Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
@DOne-ci1jg - FWIW I think 80% of what he said in this session actually resonates with the field and was unusually sensible, but his reasons for it are different, and as you go into the specifics, it deviates. E.g. it would be a serious concern to have him as the expert involved in fleshing out the legislation.
An example of this is his quote, "..should spend more of our effort into making trustworthy and safe AI rather than just making a bigger version of something we already know to be unreliable".
By this he means we should stop working on deep-learning methods, which, contrary to his predictions, have been behind basically every AI breakthrough of the last eight years or so.
He has made several claims that these algorithms are too simple and will never have this or that capability, claims that have been roundly refuted several times. Many of these statements were made confidently, without arguments beyond what he thinks should be possible on philosophical grounds, which is perhaps why they have been wrong so often.
He prefers older, more traditional AI techniques, and his background is in neuroscience. It just doesn't reflect the field.
There have also been many discussions in which he has shown he doesn't understand the modern techniques.
I find it rather astounding that they would invite him as the expert when most of the field will tell you he isn't one and there are so many better picks. In fact, this is what he is famous for.
Now, a lot of the concerns about AI safety are valid, but it is rather clear that his interest in it stems from his belief that his own methods are the alternative that offers a solution. Few share that view.
So I think some of the intuitions he expresses are fair and good, but I would take them with a lot of salt, not mistake what sounds good for what is well considered, and recognize that many of his views are controversial; if you wanted to pick a reliable expert, he would not be on the list.
I am very happy that they invited someone who could bring this perspective, though, and not another Montgomery, but I question whether this was a well-researched pick.
So I'm not sure what you had in mind about who to follow - what are you interested in more specifically? How the algorithms work, how they can be applied, or AI safety?
youtube · AI Governance · 2023-05-16T22:4… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | mixed |
| Policy | regulate |
| Emotion | mixed |
| Coded at | 2026-04-27T06:24:59.937377 |
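The coded record above can be sketched as a small data type. This is a hypothetical representation mirroring the "Coding Result" table; the field names and example label values are inferred from this dump, not from any published schema.

```python
from dataclasses import dataclass

@dataclass
class CodingResult:
    """One coded comment, mirroring the dimensions in the table above.

    The value sets in the comments are those observed in this dump and
    are an assumption, not a documented vocabulary.
    """
    responsibility: str  # e.g. "none", "company", "government", "ai_itself"
    reasoning: str       # e.g. "consequentialist", "deontological", "mixed"
    policy: str          # e.g. "none", "regulate", "industry_self"
    emotion: str         # e.g. "approval", "fear", "mixed"
    coded_at: str        # ISO-8601 timestamp of when the coding was stored

# The record shown in the table above:
result = CodingResult(
    responsibility="none",
    reasoning="mixed",
    policy="regulate",
    emotion="mixed",
    coded_at="2026-04-27T06:24:59.937377",
)
```

A plain dataclass keeps the record easy to serialize back to the JSON shape the model emits.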
Raw LLM Response
```json
[
{"id":"ytr_UgxCjMgyRERZJIkf8xZ4AaABAg.9pnEEvJXGCz9ppLdG82g68","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
{"id":"ytr_Ugz7uwC_XdVYigT_wMN4AaABAg.9pnCNopi2FA9prnBkY3yt6","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytr_UgzHDvZhbtDqDqGE9994AaABAg.9pn8v-yF-1O9pw-tfinM3p","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"mixed"},
{"id":"ytr_UgzHDvZhbtDqDqGE9994AaABAg.9pn8v-yF-1O9pwkb4wncvY","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytr_Ugw664Rx60xutHn03-t4AaABAg.9pn7EnsV1uU9poyLPNYwab","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
{"id":"ytr_Ugx7ziFqJsBPmaKCYoV4AaABAg.9pn1pBSkPYh9pnIMmjWqLX","responsibility":"company","reasoning":"mixed","policy":"industry_self","emotion":"approval"},
{"id":"ytr_Ugw1lGgAunYPeXO364Z4AaABAg.9pmxRaW3Tdk9pnHQ8-VsbD","responsibility":"government","reasoning":"deontological","policy":"none","emotion":"mixed"},
{"id":"ytr_Ugw1lGgAunYPeXO364Z4AaABAg.9pmxRaW3Tdk9pnLWRnnPqt","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytr_Ugxz8_9G92PzqsTLWCx4AaABAg.9pmnSIUISGq9pn43JReBAX","responsibility":"none","reasoning":"mixed","policy":"regulate","emotion":"mixed"},
{"id":"ytr_Ugxz8_9G92PzqsTLWCx4AaABAg.9pmnSIUISGq9pn7aIf3VCD","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"}
]
```
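Since the model returns this JSON array verbatim, a parsing step that rejects malformed rows is useful before storing results. The sketch below is a minimal validator; the allowed label sets and the `ytr_` ID prefix are inferred from the sample output above and are assumptions, not a documented schema.

```python
import json

# Label vocabularies observed in this dump (an assumption, not a spec).
ALLOWED = {
    "responsibility": {"none", "company", "government", "ai_itself"},
    "reasoning": {"consequentialist", "deontological", "mixed"},
    "policy": {"none", "regulate", "industry_self"},
    "emotion": {"approval", "fear", "mixed"},
}

def validate_response(raw: str) -> list:
    """Parse a raw LLM response and reject rows with unknown labels."""
    rows = json.loads(raw)
    for row in rows:
        # IDs in this dump all carry a "ytr_" prefix.
        if not row.get("id", "").startswith("ytr_"):
            raise ValueError(f"unexpected id: {row.get('id')!r}")
        for dim, allowed in ALLOWED.items():
            if row.get(dim) not in allowed:
                raise ValueError(f"{row['id']}: bad {dim}={row.get(dim)!r}")
    return rows

# Usage with a short synthetic row (the real response is the array above):
raw = ('[{"id":"ytr_example","responsibility":"none","reasoning":"mixed",'
       '"policy":"regulate","emotion":"mixed"}]')
rows = validate_response(raw)
```

Failing fast on an unknown label catches the common LLM-coding failure mode where the model invents a value outside the codebook.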