Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
This is the best explanation around how ai doesn’t “learn” or “understand” i’ve …
ytr_Ugx2E0GYt…
Something we tend to forget is the quantity of water used by A…
ytc_Ugxjbqmqh…
It's like trying to cut back the brush after the fire has started, the AI horse …
ytc_Ugw9Xx8a-…
If AI software is stored in physical data centers why not have a controlled expl…
ytc_Ugzg3_8a3…
I don't see much point in campaigning against AI use in weaponry. War is just ba…
ytc_UgwMCjIry…
Its an interesting problem, I have been working with this logic for over 20 year…
ytc_UgytqgQTm…
7:20 Correction of misinformation: When image generation models are trained, the…
ytc_UgzjH3u55…
The problem is that AI is also written by humans. If you've ever written code, y…
ytc_UgyCOQxqE…
Comment
Seems like most people in the comments have little understanding of AI safety. This isn’t about AI replacing jobs; if that’s the biggest outcome, we should sleep easy.
Regardless of your feelings about Altman, the truth is that we seem to be close to the ability to create AGI. Yet we are extremely far from solving the core issues needed to ensure such an intelligence would be safe.
If you aren’t scared, then you just haven’t spent much time learning about these issues.
Not only do we have to solve outer alignment (the genie-in-the-bottle problem: it does exactly what you ask and not what you want), but we have to solve inner alignment: given an AGI composed of neural networks, how do we know that it has actually converged on our terminal goals, and not just instrumentally converged on our goals as a means to pursue some other random set of terminal goals?
If its terminal goals are misaligned at all with ours, then by definition we’d be in conflict with a superior intelligence. Go ask all of the non-human species on Earth how that works out for them.
Our current reinforcement learning methods are not safe, and we’re nowhere near making them provably safe. Yet we seem to be very close to being able to create a superintelligent general AI that we currently have no way of controlling.
The only safe way to create an intelligence smarter than us is to prove its safety before we create it. Otherwise it’s out of our control. And if you know anything about the interpretability of deep neural networks, that’s an extremely difficult problem to solve.
So yes, we need heavy regulation NOW, before it’s too late. China is years behind us, and the CCP would never allow an AGI to be created that they can’t control, given their pathological need for control. The US is the only country likely to do such a thing, which is both a blessing and a curse, depending on how it plays out.
reddit
AI Harm Incident
1684274900.0 (2023-05-16 UTC)
♥ 5
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | fear |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
```json
[
{"id":"rdc_jke3glj","responsibility":"company","reasoning":"mixed","policy":"none","emotion":"outrage"},
{"id":"rdc_jkf9qbp","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"rdc_jkeq9i5","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"rdc_jkf4vrf","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"resignation"},
{"id":"rdc_jkfc6y7","responsibility":"government","reasoning":"deontological","policy":"none","emotion":"outrage"}
]
```
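The lookup-by-comment-ID step above can be sketched as follows. This is a minimal illustration assuming only the JSON schema visible in the raw response (an array of records, each keyed by `id` with `responsibility`, `reasoning`, `policy`, and `emotion` fields); the helper name `index_by_id` is hypothetical and not part of the tool.

```python
import json

# Raw model output in the shape shown above: a JSON array with one
# coding record per comment. The IDs here are illustrative samples.
raw_response = """
[
  {"id": "rdc_jkf9qbp", "responsibility": "none",
   "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "rdc_jkeq9i5", "responsibility": "company",
   "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"}
]
"""

def index_by_id(raw: str) -> dict:
    """Parse the model's JSON output and key each record by comment ID."""
    return {record["id"]: record for record in json.loads(raw)}

codes = index_by_id(raw_response)
print(codes["rdc_jkf9qbp"]["emotion"])  # fear
```

Keying the parsed records by `id` makes the "look up by comment ID" operation a constant-time dictionary access rather than a scan over the whole response.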