Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "It's truly offensive that the AI sect has to ask jn every conversation if AI is …" (ytc_Ugx06d51h…)
- "I think we don't have to much to worry about in the near future because I'm pess…" (rdc_kskmzq8)
- "This is a ridiculously easy road for a self-driving car. A single line road with…" (ytc_UgwrgWaZ7…)
- "This man and the others are the reasons humanity will not be around much longer …" (ytc_Ugwzeug3z…)
- "With some smart prompt writing, I got it to write this story: Once upon a time, …" (ytr_UgwK2ddga…)
- "I know this is supposed to be funny but I think this is insensitive. As a medica…" (ytc_UgyedlgCE…)
- "It sure seems to think. When I ask ChatGPT of Gemini a multifaceted question, it…" (ytc_UgzVlCSEP…)
- "Megan movie shows it. And there was a guy type of robot movie too, I forgot the …" (ytc_UgxT9wSLE…)
Comment
Stuart Russell is a legend, but several claims here don’t line up with what’s actually happening in AI safety, governance, and deployment today. Quick fact checks:
Fact Check 1: “AI companies ignore safety.”
Not accurate. Frontier labs now require red-team evaluations, bio/cyber risk testing, alignment tuning, and model release gates. Safety system cards and independent audits are standard parts of deployment. This infrastructure didn’t exist a few years ago, but it’s the norm now.
Fact Check 2: “We have no idea how these systems work.”
Exaggerated. We don’t have full understanding, but mechanistic interpretability has mapped real circuits for reasoning, planning, behaviors, and steering vectors. The “total black box” framing is outdated.
Fact Check 3: “There’s no real regulation.”
Incorrect. The EU AI Act is now law. The US has federal reporting and testing via the AI Safety Institute and NIST frameworks. The UK, US, and others run independent frontier-model evaluations. China has strict generative-AI rules. This is an active governance environment, not a vacuum.
Fact Check 4: “Models already show self-preservation and willingness to harm.”
Misleading. These behaviors show up in fictional or engineered prompts, not real agentic drives. Current models don’t have persistent goals; they mirror patterns in training data. Safety teams patch these with alignment methods and targeted tuning.
Fact Check 5: “We’re heading toward guaranteed recursive self-improvement.”
Overstated. Some research uses models to improve training data or tune parts of the pipeline, but no deployed system can autonomously rewrite itself, scale its own compute, or redeploy new versions without humans. Labs explicitly track and avoid risks of autonomous replication or self-modification.
Fact Check 6: “Experts think there’s a 25 percent chance of extinction.”
Misrepresented. That number comes from a speculative survey and a few individual quotes, not a consensus. Actual risk work focuses on misuse, cybersecurity, bio-threat access, persuasion risk, and societal impacts—not extinction scenarios.
Fact Check 7: “China is unregulated and forcing a dangerous race.”
False. China already requires security assessments, watermarking, and model controls for generative AI. The “China is wild west” framing is outdated.
Fact Check 8: Ignoring the open safety ecosystem.
Thousands of researchers contribute to open eval suites, threat modeling tools, red-team utilities, and safety benchmarks. Safety work is not limited to big labs.
Bottom Line:
Russell is influential, but a lot of his framing is anchored in older AGI thought experiments rather than the current reality of safety engineering, risk evaluation, policy, and oversight. The picture today is far more structured and grounded than the doomsday narrative suggests.
youtube · AI Governance · 2025-12-05T20:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | mixed |
| Policy | industry_self |
| Emotion | mixed |
| Coded at | 2026-04-26T23:09:12.988011 |
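The dimensions above follow a fixed codebook. As a minimal validation sketch, assuming the category values observed in the raw responses on this page are representative (the actual codebook may define more values, so treat the sets below as assumptions):

```python
# Minimal validation sketch for one coded record. The value sets below are
# inferred from the sample output on this page; they are assumptions, not
# the authoritative codebook.
ALLOWED = {
    "responsibility": {"government", "company", "developer", "user", "ai_itself", "none"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed", "unclear"},
    "policy": {"regulate", "ban", "liability", "industry_self", "none"},
    "emotion": {"fear", "outrage", "approval", "indifference", "mixed"},
}

def validate_record(record: dict) -> list[str]:
    """Return a list of problems found in one coded record."""
    problems = []
    if "id" not in record:
        problems.append("missing id")
    for dimension, allowed in ALLOWED.items():
        value = record.get(dimension)
        if value not in allowed:
            problems.append(f"{dimension}={value!r} not in codebook")
    return problems
```

A record that passes returns an empty list; anything else names the offending dimension, which makes it easy to flag off-codebook values the model occasionally produces.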
Raw LLM Response
```json
[
  {"id":"ytc_UgyP-RMbqNx5sTmHuhl4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugzw06UBNArUKeUSEYp4AaABAg","responsibility":"government","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugy5jZuN1vcHEFu5-wx4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"approval"},
  {"id":"ytc_UgwKODpSqVtVIJ3zVXx4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_Ugyvvu40hb2pq9qSWPx4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_UgwHqOuqTJQKJmR8dgB4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
  {"id":"ytc_Ugy4hZl4CmewmQptvuZ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxFDwqM-A8Gb5rqqzF4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgxmjOr-hvUUFg2CHsl4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_UgyK9GvfHlg8p3wMlKd4AaABAg","responsibility":"company","reasoning":"mixed","policy":"industry_self","emotion":"mixed"}
]
```
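Because the raw response is a JSON array of per-comment objects, looking a coding up by comment ID is a single parsing step. A minimal sketch, assuming the response text is available as a string (the function and variable names here are illustrative, not part of the tool itself):

```python
import json

def lookup_coding(raw_response: str, comment_id: str) -> dict | None:
    """Parse a raw LLM response (a JSON array of coded comments) and
    return the record whose "id" matches comment_id, if any."""
    for record in json.loads(raw_response):
        if record.get("id") == comment_id:
            return record
    return None

# Example with one record from the batch shown above.
raw = ('[{"id":"ytc_UgyK9GvfHlg8p3wMlKd4AaABAg","responsibility":"company",'
       '"reasoning":"mixed","policy":"industry_self","emotion":"mixed"}]')
print(lookup_coding(raw, "ytc_UgyK9GvfHlg8p3wMlKd4AaABAg"))
```

Returning `None` for an unknown ID keeps missing codings distinguishable from codings whose dimensions are simply empty.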