Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
EXACTLY! This is a *terrible* idea. Ignore the bicentennial man bullshit that comes when you imagine AI personhood. All this debate is about is whether the developers of AI/machine learning algorithms can be held responsible for the actions of their creations. The example of an AI that tweets about murdering people without input from devs is a shitty example because it has no real world implications. There are a shit ton that do. Off the top of my head, machine learning high frequency trading platforms. Horribly unregulated at the moment and could _easily_ cause irreparable damage and yet another recession. Are we really ok with indemnifying the creators of an HFT AI against anything negative the AI might do? ‘Yeah we know we just crashed the market and made a shit ton of money in the process but it wasn’t us, it was the robot. According to the law you made up because a Twitter bot was a bit racist, we’re totally in the clear.’ It's the 'corporations are people' debate all over again; individuals reap the massive benefits but are protected from the stupid, greedy and dangerous actions they take while the rest of us pick up the tab. Edit: looking at the debate further, there isn't really even a strong definition being used here. We should fear a definition so broad that it includes basically any algo that has emergent properties. The implications of that are alarming to say the least. This is what happens when law is debated by people fundamentally uninformed of the consequences.
Source: reddit · Thread: AI Moral Status · Timestamp: 1524963494.0 · Score: ♥ 19
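The numeric field in the metadata line above reads as a Unix timestamp. A minimal sketch of decoding it, assuming the value is seconds since the epoch in UTC:

```python
from datetime import datetime, timezone

# 1524963494.0 is the timestamp from the metadata line above,
# interpreted here as seconds since the Unix epoch (an assumption).
posted = datetime.fromtimestamp(1524963494.0, tz=timezone.utc)
print(posted.isoformat())  # -> 2018-04-29T00:58:14+00:00
```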
Coding Result
Dimension        Value
Responsibility   developer
Reasoning        consequentialist
Policy           regulate
Emotion          outrage
Coded at         2026-04-25T08:33:43.502452
Raw LLM Response
[
  {"id": "rdc_n0hrjr8", "responsibility": "none",      "reasoning": "unclear",          "policy": "none",      "emotion": "indifference"},
  {"id": "rdc_n0i8jgz", "responsibility": "user",      "reasoning": "unclear",          "policy": "none",      "emotion": "indifference"},
  {"id": "rdc_dy4jt04", "responsibility": "ai_itself", "reasoning": "deontological",    "policy": "liability", "emotion": "outrage"},
  {"id": "rdc_dy4u4xk", "responsibility": "user",      "reasoning": "deontological",    "policy": "liability", "emotion": "fear"},
  {"id": "rdc_dy52vbi", "responsibility": "developer", "reasoning": "consequentialist", "policy": "regulate",  "emotion": "outrage"}
]
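The raw response above is a JSON array of per-comment coding records. A minimal sketch of how such a payload might be parsed and indexed by comment id, so a single comment's coding can be looked up (record ids and dimension names come from the response above; the helper name and the lookup pattern are illustrative assumptions, not the tool's actual code):

```python
import json

# Raw model output, abbreviated to two of the five records shown above.
raw = '''[
 {"id":"rdc_n0hrjr8","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
 {"id":"rdc_dy52vbi","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"}
]'''

# The four coding dimensions shown in the result table.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def index_codings(payload: str) -> dict:
    """Parse the JSON array and index each record's dimensions by comment id."""
    records = json.loads(payload)
    indexed = {}
    for rec in records:
        # Keep only the coding dimensions; a missing key raises KeyError,
        # flagging a malformed record instead of silently passing it through.
        indexed[rec["id"]] = {d: rec[d] for d in DIMENSIONS}
    return indexed

codings = index_codings(raw)
print(codings["rdc_dy52vbi"]["responsibility"])  # -> developer
```

Indexing by id rather than list position matters here because the model codes a batch of comments in one response, and the order of records is not guaranteed to match the order of the input.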