Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
>...you have no way to know for sure what art was used to train an AI Model unless the person who trained it actually reveals that information. That's what this law requires. They have to reveal it. If they lie and get caught (eg. because someone finds probably cause to subpoena their training dataset and it turns out they omitted stuff) then this bill provides a mechanism to take legal action against them.
Source: reddit · Topic: AI Responsibility · Timestamp: 1712860452.0 · ♥ 1
Coding Result
| Dimension | Value |
| --- | --- |
| Responsibility | company |
| Reasoning | deontological |
| Policy | regulate |
| Emotion | approval |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
```json
[
  {"id": "rdc_kz4670u", "responsibility": "company", "reasoning": "deontological", "policy": "regulate", "emotion": "approval"},
  {"id": "rdc_kyz3qmd", "responsibility": "government", "reasoning": "unclear", "policy": "none", "emotion": "resignation"},
  {"id": "rdc_kyzh4a9", "responsibility": "company", "reasoning": "deontological", "policy": "liability", "emotion": "outrage"},
  {"id": "rdc_kyyeol5", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "rdc_kz2c894", "responsibility": "government", "reasoning": "consequentialist", "policy": "none", "emotion": "outrage"}
]
```
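The raw response is a batch: one JSON object per comment, keyed by `id`. A minimal sketch of how such a response could be parsed and matched back to an individual comment (the function name `index_by_id` is illustrative, not part of the tool; only the JSON shape is taken from the response above):

```python
import json

# A truncated sample of the raw batch response shown above.
raw = '''[
  {"id": "rdc_kz4670u", "responsibility": "company", "reasoning": "deontological",
   "policy": "regulate", "emotion": "approval"},
  {"id": "rdc_kyz3qmd", "responsibility": "government", "reasoning": "unclear",
   "policy": "none", "emotion": "resignation"}
]'''

def index_by_id(raw_response: str) -> dict:
    """Parse a batch coding response and index each coding row by comment id."""
    return {row["id"]: row for row in json.loads(raw_response)}

coded = index_by_id(raw)
# Look up the coding result for the comment displayed on this page.
print(coded["rdc_kz4670u"]["policy"])  # regulate
```

Indexing by `id` makes the lookup robust to the model returning rows in a different order than the comments were submitted.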