Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
@angelicloli9381 It doesn't. AI generates completely new images. As Sam explaine…
ytr_UgxoNa13g…
While you all were accepting there AI and thought it was all goo you now see how…
ytr_UgzvGMR5v…
AI bubble... Snap-crakle-pop! This is PR to mitigate the pop. Cynical perspectiv…
ytc_UgwmfKn9O…
No rll I am always nice even to Siri I don’t care if she not ai…
ytc_UgwY9rKrU…
Using a chatbot and build an emotional relationship with it is pathetic and sad.…
ytc_Ugwx7wxX7…
It's genuinely sad to see how much hatred the common folk has towards AI when ob…
ytc_UgyOc2t6d…
People who care about Ai-made music, don't care for music anyway : they don't ca…
ytc_UgwY_8QeX…
Videogames have been using Generative AI for ages to make infinitely unique maps…
ytc_UgwwmDLXB…
Comment
Excellent interview, but Hinton seems to have a slight touch of MDS.
Perhaps the good professor should read up on why Musk funded Open AI originally - as a way to fight a lot of what Hinton has now realized could be an existential threat to humanity in unchecked/unregulated AI proliferation.
Additionally, when the CEO (one of his former students if I recall correctly) & co of Open AI went back on their charter to make the company private and for profit (the opposite of what they told Elon when he funded them), Musk then began designing his own AI inside Tesla with the same purpose for which he had originally funded Open AI. Yet Prof Hinton seems not to know about any of this. 🤔
youtube
AI Governance
2025-06-24T21:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | mixed |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
  {"id":"ytc_UgzggaTBHzHZbffaHBh4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugy8gYeWsViy1EkLlYl4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugw3yJi8bMVXGtxoQLd4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"mixed"},
  {"id":"ytc_UgxioJ6OWIryOkvsEvF4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugyip9cpna_ev3nrny14AaABAg","responsibility":"none","reasoning":"unclear","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgyVDTQ_Co8759GfV5J4AaABAg","responsibility":"government","reasoning":"deontological","policy":"none","emotion":"fear"},
  {"id":"ytc_UgwI1SJdGhu8RAb7j0l4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxBVjtBRhd2mO__Zf54AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgxUUyw7VGem5NU3G_V4AaABAg","responsibility":"government","reasoning":"deontological","policy":"none","emotion":"fear"},
  {"id":"ytc_UgwBBtMaEtkBGk9zHrZ4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"}
]
```
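A raw batch response like the one above can be parsed and sanity-checked with a short sketch. The allowed values per dimension below are inferred only from the records shown in this section (an assumption: the full codebook may define additional categories), and `parse_batch` is a hypothetical helper name, not part of the tool itself.

```python
import json

# Allowed values per coding dimension, inferred from the records above.
# Assumption: the real codebook may permit more categories than these.
ALLOWED = {
    "responsibility": {"none", "government", "developer", "ai_itself"},
    "reasoning": {"unclear", "consequentialist", "deontological", "mixed"},
    "policy": {"none", "regulate", "ban"},
    "emotion": {"indifference", "fear", "mixed", "outrage",
                "resignation", "approval"},
}

def parse_batch(raw: str) -> list[dict]:
    """Parse a raw LLM batch response, keeping only records whose
    id is present and whose codes all fall in the allowed sets."""
    records = json.loads(raw)
    return [
        rec for rec in records
        if rec.get("id")
        and all(rec.get(dim) in vals for dim, vals in ALLOWED.items())
    ]

raw = ('[{"id":"ytc_Ugw3yJi8bMVXGtxoQLd4AaABAg",'
       '"responsibility":"developer","reasoning":"consequentialist",'
       '"policy":"regulate","emotion":"mixed"}]')
print(len(parse_batch(raw)))  # → 1
```

Dropping invalid records (rather than raising) keeps one malformed line from failing a whole batch; a stricter variant could log the offending `id` instead.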