Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
It's a good point. There's two big legal issues (depending on your location). One is "where is the data going, and how is it secured" and the answer right now for most gen-AI is "wherever they want it to, and not very well", which is an instant no-go for any business with data protection responsibilities. The other is "can you guarantee the process you put together, integrating gen-AI with your other systems will work 100% of the time, and when one time out of a hundred or thousand it fucks up and delivers nonsense, who'll be responsible for the consequences". They're not necessarily unsolvable problems. In the former case dedicated onsite instances, or industry specific instances with security assurances would do the job. As for reliability... I suppose that's on the industry to figure out. But in the meantime, listen to your lawyers when they say they don't want you to get sued.
reddit · AI Responsibility · 1755595154.0 · ♥ 4
Coding Result
Dimension        Value
Responsibility   company
Reasoning        deontological
Policy           regulate
Emotion          indifference
Coded at         2026-04-25T08:33:43.502452
Raw LLM Response
[
  {"id":"rdc_n9hzee8","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"indifference"},
  {"id":"rdc_n9ig08d","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"rdc_n9ixia5","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"rdc_n9kka6l","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"rdc_n9jts9g","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"outrage"}
]
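A minimal sketch of how a raw response like the one above could be parsed and matched back to individual comments. This assumes the model reliably returns a JSON array of objects keyed by `id`; the `parse_codings` helper and the truncated sample data are illustrative, not part of the actual pipeline.

```python
import json

# Hypothetical raw model output: a JSON array of per-comment codings,
# with ids and values taken from the example above (truncated to two records).
raw = '''[
  {"id":"rdc_n9hzee8","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"indifference"},
  {"id":"rdc_n9ig08d","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"fear"}
]'''

def parse_codings(raw_text):
    """Parse raw model output into a dict keyed by comment id."""
    records = json.loads(raw_text)
    return {rec["id"]: rec for rec in records}

codings = parse_codings(raw)
print(codings["rdc_n9hzee8"]["policy"])  # → regulate
```

Keying by `id` makes it easy to join each coding back to its source comment, and a failed `json.loads` surfaces malformed model output immediately rather than silently mis-assigning codes.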