Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
I think a lot of this is way overhyped. Ai in its current form is just a glorifi…
ytc_UgwO9WIYu…
This doesn’t get to the underlying assholes that dictate and control Silicon Val…
ytc_UgwFSzN94…
there's no substance to this video. All these things are prompted by users. A co…
ytc_UgzpEF-dI…
The sad truth is if ur job is replaceable by a robot then you should most probab…
ytc_UgwQszmwv…
I almost agree with both points. The corporations should pay for the use of the …
ytc_UgyEJaagm…
I'm sorry to hear that you don't like the name Sophia. Names can hold different …
ytr_UgytvIL7H…
Just terminate ai crap right now it's making us all lazy and stupid here by the …
ytc_UgyqFabGa…
Great even more diving comments on random posts facebook shoves on my feed. I ju…
rdc_m5ovwtj
Comment
This is a monumental shift in the legal defense strategy for AI labs. By admitting zero post-deployment control, Anthropic is essentially positioning LLMs as "stateless" commodities rather than "services." It’s the "I just sold the hammer, I didn't swing it" defense, but applied to a tool that can theoretically rewrite its own operating manual.
The pharmaceutical comparison you made is the most chilling part. If we treat AI like a drug that cannot be recalled from the bloodstream of an enterprise, the "duty of disclosure" shifts from marketing fluff to a rigorous stress-test of the model's absolute failure ceiling. We are moving from a world of "Model Cards" to a world of "Black Box Warnings." If you can't kill the process remotely, the liability shouldn't disappear; it should just front-load onto the safety alignment phase with massive punitive stakes.
I’ve dealt with this "integrity gap" in my own development work. When you're shipping complex AI integrations, there is a terrifying moment where you realize the end-user's context can completely warp the model's intended behavior. I started using Runable for my technical documentation and project showcases because it anchors the raw, unpredictable AI output into a professional, structured, and VC-ready format automatically. It provides a layer of "contained professionalism" that helps bridge that trust gap between the vendor’s logic and the client’s infrastructure.
The real legal precedent here will be whether "lack of control" is viewed as a technical limitation or a negligent design choice. If you build a product that is inherently uncontrollable, "I couldn't stop it" sounds less like a defense and more like a confession.
reddit
Viral AI Reaction
Posted: 1776951003.0 (Unix timestamp)
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | contractualist |
| Policy | liability |
| Emotion | fear |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
[
{"id":"rdc_ohtdieg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"resignation"},
{"id":"rdc_ohuwpik","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"rdc_ohu18fn","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"rdc_ohu4atk","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
{"id":"rdc_ohtfqom","responsibility":"company","reasoning":"contractualist","policy":"liability","emotion":"fear"}
]
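The raw LLM response above is a JSON array with one record per comment, each carrying the four coding dimensions shown in the table (responsibility, reasoning, policy, emotion). A minimal sketch of how such a batch response could be parsed and indexed for lookup by comment ID — the `index_by_id` helper and the shortened sample array are illustrative, not part of the actual pipeline:

```python
import json

# Shortened sample of the raw LLM batch response shown above.
raw_response = """[
  {"id":"rdc_ohtdieg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"resignation"},
  {"id":"rdc_ohtfqom","responsibility":"company","reasoning":"contractualist","policy":"liability","emotion":"fear"}
]"""

# The four coding dimensions from the "Coding Result" table.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def index_by_id(raw: str) -> dict:
    """Parse the JSON array and index each record's dimensions by comment ID."""
    records = json.loads(raw)
    return {rec["id"]: {d: rec[d] for d in DIMENSIONS} for rec in records}

codes = index_by_id(raw_response)
print(codes["rdc_ohtfqom"])
# {'responsibility': 'company', 'reasoning': 'contractualist',
#  'policy': 'liability', 'emotion': 'fear'}
```

This matches the table rendering above: looking up `rdc_ohtfqom` recovers the company/contractualist/liability/fear coding.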