Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
The issue here is that if you look at the Eric Schmidt talk at Stanford, he is advising AI engineers to instruct their AIs to copy and steal entire product lines and business models and let the lawyers fight it out down the line. The tech companies don't see the ability for AIs to break the law to make money as a problem; they view it as a feature. When one of the stated uses of the technology is to be a patsy that breaks the law on its creators' behalf, you have to start looking at the intent behind its creation as malicious. A more realistic analogy might be that of a bomb maker or a gun maker. We regulate such industries and expect at least some measure of vetting and control from the vendors and creators of such technologies. Why would AI be any different?
reddit AI Responsibility 1724490753.0 ♥ 14
Coding Result
Dimension        Value
Responsibility   company
Reasoning        virtue
Policy           regulate
Emotion          outrage
Coded at         2026-04-25T08:33:43.502452
Raw LLM Response
[
  {"id": "rdc_ljqs2mc", "responsibility": "none",      "reasoning": "unclear",         "policy": "none",      "emotion": "indifference"},
  {"id": "rdc_ljollu9", "responsibility": "developer", "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"},
  {"id": "rdc_ljoetnp", "responsibility": "user",      "reasoning": "deontological",    "policy": "none",      "emotion": "outrage"},
  {"id": "rdc_ljohhbv", "responsibility": "company",   "reasoning": "virtue",           "policy": "regulate",  "emotion": "outrage"},
  {"id": "rdc_ljp0tt6", "responsibility": "government","reasoning": "deontological",    "policy": "ban",       "emotion": "outrage"}
]
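As a minimal sketch of how a batch response like the one above can be turned back into per-comment codings, the snippet below parses the JSON array and indexes the records by id. The record ids and dimension names are taken verbatim from the response; the lookup key used here (rdc_ljohhbv) is assumed to be the id of the comment shown above, since that record's values match its coding result.

```python
import json

# Raw batch response from the coding model, copied from above:
# a JSON array of records, one per comment, each carrying the
# four coding dimensions (responsibility, reasoning, policy, emotion).
raw = '''[ {"id":"rdc_ljqs2mc","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"rdc_ljollu9","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"rdc_ljoetnp","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"rdc_ljohhbv","responsibility":"company","reasoning":"virtue","policy":"regulate","emotion":"outrage"},
{"id":"rdc_ljp0tt6","responsibility":"government","reasoning":"deontological","policy":"ban","emotion":"outrage"} ]'''

records = json.loads(raw)

# Index by comment id so one comment's coding can be looked up directly.
by_id = {rec["id"]: rec for rec in records}

# Assumed id for the comment displayed above; its values agree with
# the "Coding Result" table (company / virtue / regulate / outrage).
coding = by_id["rdc_ljohhbv"]
print(coding["responsibility"], coding["reasoning"],
      coding["policy"], coding["emotion"])
```

This one-pass dictionary build keeps the lookup O(1) per comment, which matters once batches grow beyond a handful of records.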