Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
[The bill](https://leginfo.legislature.ca.gov/faces/billNavClient.xhtml?bill_id=202320240SB1047) is essentially crippling the industry in California by forcing it to be neutered in the name of "safety," while keeping the language vague enough to justify mass surveillance of customers of computing clusters. The information gathered must be retained and made available to the government at any time. This data extends to what the customer is doing, their payment methods, addresses, and IP addresses. So, if you're attempting to train an LLM in California, you might as well consider yourself on the sex-offenders registry, because that's how easily your information will be accessible.

Beyond that, they vaguely reference "critical" harm theory. While they list specific points like preventing the creation of weapons of mass destruction, which is understandable, and not causing damage or injury beyond a certain threshold ($500K), they then slip in ambiguous requirements like:

> "Retain an unredacted copy of the safety and security protocol for as long as the covered model is made available for commercial, public, or foreseeably public use plus five years, including records and dates of any updates or revisions."

> "(A) Specifies protections and procedures that, if successfully implemented, would successfully comply with the developer's duty to take reasonable care to avoid producing a covered model or covered model derivative that poses an unreasonable risk of causing or materially enabling a critical harm."

They might as well simplify it to say: "Just create a CSAM filter and what it targets is up to our discretion."
reddit · AI Responsibility · 1724502353.0 · ♥ 8
Coding Result
| Dimension | Value |
| --- | --- |
| Responsibility | government |
| Reasoning | deontological |
| Policy | ban |
| Emotion | outrage |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
```json
[
  {"id": "rdc_ljqs2mc", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "rdc_ljollu9", "responsibility": "developer", "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"},
  {"id": "rdc_ljoetnp", "responsibility": "user", "reasoning": "deontological", "policy": "none", "emotion": "outrage"},
  {"id": "rdc_ljohhbv", "responsibility": "company", "reasoning": "virtue", "policy": "regulate", "emotion": "outrage"},
  {"id": "rdc_ljp0tt6", "responsibility": "government", "reasoning": "deontological", "policy": "ban", "emotion": "outrage"}
]
```
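The coding result shown above is the entry in this raw JSON array whose `id` matches the comment being inspected. A minimal sketch of that lookup, assuming the field names from the response (the `extract_coding` helper and the truncated `raw_response` string are illustrative, not the project's actual code):

```python
import json

# Abridged copy of the raw LLM response from the section above:
# a JSON array with one coded entry per comment.
raw_response = """
[
  {"id": "rdc_ljqs2mc", "responsibility": "none", "reasoning": "unclear",
   "policy": "none", "emotion": "indifference"},
  {"id": "rdc_ljp0tt6", "responsibility": "government", "reasoning": "deontological",
   "policy": "ban", "emotion": "outrage"}
]
"""

# The four coding dimensions that appear in the result table.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def extract_coding(raw: str, comment_id: str) -> dict:
    """Parse the model output and return the coded dimensions for one comment."""
    for entry in json.loads(raw):
        if entry.get("id") == comment_id:
            return {dim: entry.get(dim, "none") for dim in DIMENSIONS}
    raise KeyError(f"no coding found for {comment_id!r}")

coding = extract_coding(raw_response, "rdc_ljp0tt6")
print(coding)
# {'responsibility': 'government', 'reasoning': 'deontological',
#  'policy': 'ban', 'emotion': 'outrage'}
```

The lookup falls back to `"none"` for any dimension the model omitted, so a partially coded entry still yields a complete row for the result table.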