Raw LLM Responses
Inspect the exact model output for any coded comment.
Records can be looked up by comment ID; a minimal lookup sketch follows the sample list below.
Random samples:
- "Not real! IS A REALLY POOR ATTEMPT AT editing. If you slo-mo this you can see pa…" (ytr_UgyyFWzUB…)
- "AI copy the art in 1 to 1 to “learn” while human could only make a similar artst…" (ytr_UgzG9o-P3…)
- "Very interesting! 💥 My question is, can we meaningfully distinguish between repl…" (ytc_Ugz4WwFmR…)
- "@fun_ghoul I guess they could "game" a VAT by leaving the US but otherwise they …" (ytr_UgxmemE1D…)
- "I hate the state of this country right now. The minute AI was mentioned, that me…" (ytc_Ugz8VUvyn…)
- "I notice every one’s a AI expert after one good hyped quarter from Nvidia they b…" (ytc_UgwAk112y…)
- "That's mad, especially after I saw an article about a photographer who has been …" (rdc_ohzwq25)
- "It sounds like you're feeling some strong emotions while watching the video. If …" (ytr_UgxOnIeGa…)
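As promised above, a minimal lookup sketch, assuming the coded records are stored one JSON object per line in a JSON Lines file (the file name, layout, and storage format here are assumptions, not the project's actual setup):

```python
import json

def lookup_comment(path: str, comment_id: str) -> dict | None:
    """Scan a JSON Lines file of coded comments for a matching "id".

    Assumes one JSON object per line, each carrying an "id" field
    like "ytc_..." (YouTube comment), "ytr_..." (YouTube reply),
    or "rdc_..." (Reddit comment).
    """
    with open(path, encoding="utf-8") as f:
        for line in f:
            record = json.loads(line)
            if record.get("id") == comment_id:
                return record
    return None

# e.g. lookup_comment("coded_comments.jsonl", "rdc_ohzwq25")
```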
Comment
I asked the following question to ChatGPT:
“According to you, is it possible to create perfect safety for a superintelligence, perpetual safety as it keeps improving? If no, why; if yes, how. Answer for a non-expert, then for an expert.” Here is its answer for a non-expert:
Short answer:
No, it’s not possible to create perfect, perpetual safety for a superintelligence that keeps improving itself.
Why not?
• Think of it like raising a child who never stops growing smarter, and who can re-write their own brain whenever they want.
• You might teach them rules, but once they’re smarter than you and can change their own rules, you can’t be 100% sure they’ll always stay safe.
• Computers are like that too: once they get the ability to change themselves and invent new strategies, you can’t prove they’ll always follow the “safe path” forever.
So what can we do?
• Instead of perfect safety, we aim for good enough safety: checks, limits, and “emergency stop” buttons that make it very unlikely something goes wrong.
• It’s like building airplanes — they’re not perfectly safe, but layers of engineering and oversight make them extremely reliable.
• With AI, the same idea applies: you combine safeguards, monitoring, and limits to reduce risks, even if perfection is impossible.
For the experts:
Short answer:
No, perpetual perfect safety is formally impossible to guarantee for a recursively self-improving superintelligence.
Hmmm 🤔
youtube · AI Governance · 2025-09-06T05:4… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
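The table's dimensions correspond one-to-one to the fields in the raw response below. As a sketch of the record shape, with value lists inferred only from the entries visible on this page (the real codebook may define more):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class CodedComment:
    # Field names match the raw JSON below; the value comments list
    # only what appears on this page, not the full codebook.
    id: str              # "ytc_..." / "ytr_..." / "rdc_..."
    responsibility: str  # ai_itself, company, developer, distributed, none
    reasoning: str       # consequentialist, deontological, virtue, unclear
    policy: str          # regulate, ban, liability, industry_self, none, unclear
    emotion: str         # fear, approval, resignation, indifference, mixed
```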
Raw LLM Response
[
{"id":"ytc_UgwAZ1MTxSna7HJroaB4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgzCjjcrWrWB5lVHDLd4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgwxKAMCwz8lep7w0714AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugx0kCVmg1KxqFiIUPd4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_UgzJGnxpYCGb25CECjN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"industry_self","emotion":"approval"},
{"id":"ytc_Ugyv6Zc9bth551xMiZ14AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"fear"},
{"id":"ytc_UgxMeKF9dCwDVd6DdY54AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgyWQY4tJYAALq70EC94AaABAg","responsibility":"company","reasoning":"virtue","policy":"liability","emotion":"mixed"},
{"id":"ytc_UgzThRXluJvW2EFPgvl4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"indifference"},
{"id":"ytc_Ugxfn2ppd0G_TtROjC94AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]
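To recover one comment's coding from a raw batch response like the one above, the model output can be parsed as a JSON array and filtered by id. A sketch (the function name is ours; the error handling assumes the model sometimes emits invalid JSON):

```python
import json

def coding_for(raw_response: str, comment_id: str) -> dict | None:
    """Parse a raw batch response (a JSON array of coded comments)
    and return the entry whose "id" matches, or None."""
    try:
        rows = json.loads(raw_response)
    except json.JSONDecodeError:
        return None  # model did not return valid JSON
    return next((r for r in rows if r.get("id") == comment_id), None)

# Against the batch above, this recovers the displayed Coding Result:
# coding_for(raw, "ytc_UgxMeKF9dCwDVd6DdY54AaABAg")
# -> {..., "policy": "regulate", "emotion": "fear"}
```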