Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
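A minimal sketch of how such a lookup could work outside the page is shown below. It assumes the raw batch responses are stored on disk as JSON files (the directory name `raw_responses/` is hypothetical) and that each file holds an array of coded rows with an `id` field, as in the raw response shown at the bottom of this page.

```python
# Sketch of a "look up by comment ID" helper. The storage layout is an
# assumption: raw batch responses saved as *.json files, each containing a
# JSON array of coded rows with an "id" field like the response shown below.
import json
from pathlib import Path

def find_coding(comment_id: str, response_dir: str = "raw_responses") -> dict | None:
    """Return the coded row for comment_id, or None if it was never coded."""
    for path in Path(response_dir).glob("*.json"):
        rows = json.loads(path.read_text(encoding="utf-8"))
        for row in rows:  # one dict per coded comment
            if row.get("id") == comment_id:
                return row
    return None

# Example: the comment inspected on this page.
print(find_coding("ytc_UgxaB2tJLJeq3c5h9054AaABAg"))
```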
Random samples — click to inspect
| Comment preview | ID |
|---|---|
| the author has no idea how LLMs work and is wrong on so many levels. or knows it… | ytc_Ugx_4_lVl… |
| This is the plot of a legendary series I read called Arc of a Scythe where they … | ytc_UgwH_GcMW… |
| Im trying to get into the industry andwas finally working for a small indie game… | ytc_Ugw9I5Op6… |
| Wouldn't it be funny if AI is taught to be ethical in order to keep it from dest… | ytc_Ugz_1zmYf… |
| There are some good uses for generative ai, but making art is not one of them. K… | ytc_UgyR3Xh3T… |
| Super surprised you didn't touch on Google firing top executives within their AI… | ytc_Ugy9qyzZz… |
| @Bleyblader I'm not saying that China is anyone's enemy. My point is that wester… | ytr_UgwUWNXds… |
| Im happy to argue why capitalist are a worldwide pandemic that offer things like… | rdc_g9asdt7 |
Comment
I'm a huge Hinton fan (his academic work), but his recent stage does not seem very helpful. Yes, AI poses dangers, but what are we gonna do about it? Sit back, learn plumbing, call your local representative once day? I mean that's certainly not nothing, but why not offer an approach that leverages knowledge and research, instead of more or less idling around, waiting for doom to knock on your door?
Look at his student, Ilya, who quit OpenAI and moved on to start a company all about safe AI. If you're worried about your career, AI may not not take programming jobs for 20 more years. I do both AI and software engineering, but anyone that knows a little about code knows that current AI is very far from taking any jobs. And agentic frameworks look promising, but to me it looks like this in need of mostly human intervention, so progress there may be in the order of decades. I can only recommend to keep studying computer science, math, and understand the "threat" that is coming.
To me, this is a no-brainer. Plenty people worried about not having jobs, and plenty people worried about AI as a threat. Software engineering may no longer be a high paying job in the future, but it needs more people. And it especially needs people who *know* what they're doing, rather than going through their entire education by having AI do their chores, and them not learning anything beyond prompt engineering. The world will be chock full of such people, who got their degrees without effort, but also don't have skills beyond prompting.
And if you can't afford a degree, then just teach yourself. It's never been easier than now, where AI can teach it all to you. What matters is what you ask the right questions and pay attention, and never blindly trust anything the AI says.
If you're not willing to touch math with a ten foot pole, then you could still go into red-teaming AI. Learn how to hack it, make it your career to exposes its weaknesses. There will be no shortage of that in the future. You don't even need to be employed, there are bounty programs for ethical hacking.
The worst thing you can do is let yourself be depressed by the potential of a bad future that may not even happen. Remember that Hinton's position is not that AI is likely to be a detriment to humanity, but that this is a possibility that shouldn't be ignored. It may still turn out largely positive. How this goes is influenced by CEOs and politicians, sure, but certainly also in part by AI engineers and security experts of the future.
youtube · AI Governance · 2025-06-16T21:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | outrage |
| Coded at | 2026-04-27T06:24:59.937377 |
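For downstream analysis, the four coded dimensions can be expressed as a small Python type. This is only a sketch: the value sets below are limited to the labels visible on this page (the table above and the raw response below), and the full codebook may define additional categories.

```python
# The coding schema as Python types. The allowed values listed here are only
# those observed on this page; the actual codebook may include more.
from dataclasses import dataclass

RESPONSIBILITY = {"developer", "company", "ai_itself", "distributed", "none"}
REASONING = {"consequentialist", "deontological", "unclear"}
POLICY = {"regulate", "ban", "liability", "none", "unclear"}
EMOTION = {"fear", "outrage", "approval", "indifference", "mixed"}

@dataclass
class CodedComment:
    id: str
    responsibility: str
    reasoning: str
    policy: str
    emotion: str

    def is_valid(self) -> bool:
        """True if every dimension uses a value observed on this page."""
        return (self.responsibility in RESPONSIBILITY
                and self.reasoning in REASONING
                and self.policy in POLICY
                and self.emotion in EMOTION)

# Example, using the row coded for the comment above.
row = CodedComment(id="ytc_UgxaB2tJLJeq3c5h9054AaABAg",
                   responsibility="developer", reasoning="consequentialist",
                   policy="unclear", emotion="outrage")
assert row.is_valid()
```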
Raw LLM Response
```json
[
  {"id":"ytc_UgwZn61buOF891OqoLF4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwA85AH6LOXkpK8pi54AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugwp1j45RXTYZN6tRLl4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgxaB2tJLJeq3c5h9054AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_Ugx7HOtgYBL4UNpZWEx4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_UgwzlhO-VvrSow6v06p4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_Ugyn2etzt7qK_g4RswF4AaABAg","responsibility":"distributed","reasoning":"deontological","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ugygp8YEajbXqZ8nEnJ4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgykiPsNLtHWVOGBN5p4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgxaGiQkyIjrCuQrpJh4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"}
]
```
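Below is a hedged sketch of how a raw batch response like this one might be parsed. It assumes the model returns a JSON array of objects with the five keys shown; stray prose or markdown fences around the array, which LLM outputs sometimes include, are tolerated by slicing down to the bracketed span.

```python
# Parsing sketch for a raw batch response like the one above. Assumes a JSON
# array of objects with the five keys shown; rows missing any key are dropped.
import json

REQUIRED_KEYS = {"id", "responsibility", "reasoning", "policy", "emotion"}

def parse_batch(raw: str) -> list[dict]:
    """Parse one raw LLM response into coded rows, skipping malformed entries."""
    start, end = raw.find("["), raw.rfind("]")
    if start == -1 or end == -1:
        raise ValueError("no JSON array found in the response")
    rows = json.loads(raw[start:end + 1])
    return [row for row in rows
            if isinstance(row, dict) and REQUIRED_KEYS <= row.keys()]

# Usage: index a batch by comment ID to drive the lookup above.
# by_id = {row["id"]: row for row in parse_batch(raw_response_text)}
```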