Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples

- `ytc_Ugwfm6jTC…`: HRMs and AI derived from HRM will take over down the road. LLMs will become his…
- `ytr_UgyGtRq1h…`: LOL AI influencers will ruin the everyone you are speaking about. There's alread…
- `ytc_UgxTeE2Oh…`: Alarming intelligence is here. We created it. Our politicians are not protecting…
- `ytc_Ugwm56BkQ…`: Check out "Foundation" and the robot who is running it all - R. Giskard Reventlo…
- `ytc_Ugz3D-34-…`: Within 10 years, we will have F-bots answering to digital Pimps, standing on th…
- `ytc_Ugyhye77a…`: Definitely want more mass shootings than a system can catch criminals. All 90 mi…
- `ytc_Ugx9SMGBz…`: why does every gen ai have the sparkles icon?? i never click a sparkles icon on …
- `ytr_UgzQPKe63…`: What real difference would it make to have external indicator to show that car i…
Comment
The pharmaceutical comparison you made is the most chilling part. If we treat AI like a drug that cannot be recalled from the bloodstream of an enterprise, the duty of disclosure shifts from marketing fluff to a rigorous stress-test of the model's absolute failure ceiling. We are moving from a world of Model Cards to a world of Black Box Warnings. If you can't kill the process remotely, the liability shouldn't disappear; it should just front-load onto the safety alignment phase with massive punitive stakes.
The real legal precedent here will be whether lack of control is viewed as a technical limitation or a negligent design choice. If you build a product that is inherently uncontrollable, "I couldn't stop it" sounds less like a defense and more like a confession.
reddit · Viral AI Reaction · 1777023132 (Unix epoch) · ♥ 1
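The timestamp in the metadata above is a raw Unix epoch (seconds since 1970-01-01 UTC). A minimal sketch of rendering it as a readable UTC date, using only Python's standard library:

```python
from datetime import datetime, timezone

# Unix epoch timestamp from the comment metadata above
posted = 1777023132.0

# Convert to an aware UTC datetime and format as ISO 8601
dt = datetime.fromtimestamp(posted, tz=timezone.utc)
print(dt.isoformat())  # 2026-04-24T09:32:12+00:00
```

This lines up with the "Coded at" value in the table below it: the comment was posted roughly a day before it was coded.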
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | consequentialist |
| Policy | liability |
| Emotion | fear |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
```json
[
  {"id":"rdc_ohyyv9k","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
  {"id":"rdc_ohzmxky","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"rdc_ohyzyxr","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
  {"id":"rdc_ohzd9v3","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"rdc_ohzjtke","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"mixed"}
]
```
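The raw response is a JSON array of per-comment records, each carrying its own `id`, so recovering one comment's codes is just a matter of indexing the batch by ID. A minimal sketch in Python (the variable names are ours, not part of any tool shown here; the data is copied verbatim from the response above):

```python
import json

# Raw batch response as returned by the model (verbatim from above)
raw = """[
{"id":"rdc_ohyyv9k","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"rdc_ohzmxky","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"rdc_ohyzyxr","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
{"id":"rdc_ohzd9v3","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"rdc_ohzjtke","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"mixed"}
]"""

# Index the batch by comment ID for O(1) lookup
by_id = {row["id"]: row for row in json.loads(raw)}

# The pharmaceutical-comparison comment above was coded under rdc_ohzd9v3;
# its record matches the Coding Result table row for row
record = by_id["rdc_ohzd9v3"]
print(record["responsibility"], record["policy"], record["emotion"])  # company liability fear
```

The same dict-comprehension pattern scales to a full run: concatenate the per-batch arrays, build one index, and any comment ID resolves to its coded dimensions in constant time.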