Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
> Someone might even train an AI model on exactly the type of watermarks major sites use, given time

That one seems silly. Legal issues of targeting something that would primarily be used for piracy. All that would likely happen is they would switch to dynamic somewhat random watermark generation. They could play a cat and mouse game, but it's not like most watermarks can't be removed anyway. Or, well, just paid for. Seems like a lot of work for the intersection of people who don't want to pay for something but also really want images off a stock site instead of from, say, anywhere else on the internet where images can be stolen without that worry.
Source: reddit · Dataset: AI Harm Incident · Timestamp: 1742219913.0 · Score: 1
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  none
Reasoning       consequentialist
Policy          none
Emotion         resignation
Coded at        2026-04-25T08:33:43.502452
Raw LLM Response
[
  {"id": "rdc_mi7bms5", "responsibility": "company", "reasoning": "consequentialist", "policy": "liability", "emotion": "resignation"},
  {"id": "rdc_mi6jnal", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "rdc_mi9d5tn", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "rdc_mi66u5w", "responsibility": "company", "reasoning": "consequentialist", "policy": "liability", "emotion": "outrage"},
  {"id": "rdc_mi72k51", "responsibility": "none", "reasoning": "consequentialist", "policy": "regulate", "emotion": "resignation"}
]
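Since the raw LLM response is a JSON array of per-comment coding records, inspecting one comment's codes amounts to parsing the array and matching on `id`. A minimal sketch, using the response shown above (the `lookup` helper and dimension names are illustrative, not part of the tool):

```python
import json

# Raw LLM response: a JSON array of coding records, one per comment.
raw_response = '''[
  {"id":"rdc_mi7bms5","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"resignation"},
  {"id":"rdc_mi6jnal","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"rdc_mi9d5tn","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"rdc_mi66u5w","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
  {"id":"rdc_mi72k51","responsibility":"none","reasoning":"consequentialist","policy":"regulate","emotion":"resignation"}
]'''

# The four coded dimensions displayed in the result table above.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def lookup(records, comment_id):
    """Return the coded dimension values for one comment id, or None if absent."""
    for rec in records:
        if rec.get("id") == comment_id:
            return {dim: rec.get(dim) for dim in DIMENSIONS}
    return None

records = json.loads(raw_response)
print(lookup(records, "rdc_mi9d5tn"))
# → {'responsibility': 'none', 'reasoning': 'consequentialist', 'policy': 'none', 'emotion': 'resignation'}
```

A record whose `id` is not in the batch returns `None`, which is a useful check when the model drops or hallucinates comment ids.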