Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
1:25:00 ... mmm... if the ai is gonna be that smart... o.O?... will not want to …
ytc_UgzZY5SdA…
I'll tell you a secret...
AI doesn't produce anything.
AI is just a production …
ytc_Ugy-upBKE…
I don't know what is scarier, the impending AI apocalypse or Eliezer Yudkowsky's…
ytc_UgxTTocvI…
In 2-5 years? That's improbable. I'm guessing this man has never ran against any…
ytc_UgyV4LA1X…
That's a common reaction! Sophia often surprises us with her insights. If you en…
ytr_UgwBvw5tU…
Hmm, I'm starting to see more instances and indications of something I thought w…
ytc_Ugwctq5ua…
99% jobless then there is no need for ai because there will be no one spending m…
ytc_UgymcYao2…
Yippie! Hooray. The AI bros just suck, they arent creative. They have the intell…
ytc_Ugx7b4GZW…
Comment
[The bill](https://leginfo.legislature.ca.gov/faces/billNavClient.xhtml?bill_id=202320240SB1047) is essentially crippling the industry in California by forcing it to be neutered in the name of "safety," while keeping the language vague enough to justify mass surveillance of customers of computing clusters. The information gathered must be retained and made available to the government at any time. This data extends to what the customer is doing, their payment methods, addresses, and IP addresses. So, if you're attempting to train an LLM in California, you might as well consider yourself on the sex-offenders registry, because that's how easily your information will be accessible. Beyond that, they vaguely reference "critical" harm theory. While they list specific points like preventing the creation of weapons of mass destruction, which is understandable, and not causing damage or injury beyond a certain threshold ($500K), they then slip in ambiguous requirements like:
> "Retain an unredacted copy of the safety and security protocol for as long as the covered model is made available for commercial, public, or foreseeably public use plus five years, including records and dates of any updates or revisions."
> "(A) Specifies protections and procedures that, if successfully implemented, would successfully comply with the developer’s duty to take reasonable care to avoid producing a covered model or covered model derivative that poses an unreasonable risk of causing or materially enabling a critical harm."
They might as well simplify it to say: "Just create a CSAM filter and what it targets is up to our discretion."
reddit
AI Responsibility
2024-08-24 (1724502353.0)
♥ 8
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | government |
| Reasoning | deontological |
| Policy | ban |
| Emotion | outrage |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
[
{"id":"rdc_ljqs2mc","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"rdc_ljollu9","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"rdc_ljoetnp","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"rdc_ljohhbv","responsibility":"company","reasoning":"virtue","policy":"regulate","emotion":"outrage"},
{"id":"rdc_ljp0tt6","responsibility":"government","reasoning":"deontological","policy":"ban","emotion":"outrage"}
]