Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Well yeah, but it could be that this, as an imperfect detector, skews the generator to just generating fakes that this generator can't detect and that doesn't necessarily mean that those deep fakes are something *we* can't detect. There will always be a hardware limit and a performance limit that will manifest itself into a limit in the "resolution" of the deep fakes. And training + generating a deep fake will always be more costly than detecting the deep fake. So while this is a race, it is a skewed race. Generation will always remain massively more expensive computationally. Your neighbor won't be able to simply generate a porn movie of you after you had an argument. But the CIA might generate fake videos of unstable democratic leaders to topple them.
reddit AI Harm Incident 1651323962.0 ♥ 2
Coding Result
Dimension       Value
Responsibility  unclear
Reasoning       unclear
Policy          unclear
Emotion         unclear
Coded at        2026-04-25T08:33:43.502452
Raw LLM Response
[{"id":"rdc_i6saw8r","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
 {"id":"rdc_i6rh5yy","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
 {"id":"rdc_i6rk4r2","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
 {"id":"rdc_i6rolhn","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
 {"id":"rdc_i6rrm9g","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"}]
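The all-"unclear" values in the coding result above are what a pipeline typically falls back to when the raw model output cannot be parsed (for example, when the JSON array is not closed correctly). A minimal sketch of that defensive parsing step, with hypothetical function and dimension names not taken from the actual tool:

```python
import json

# The four coding dimensions shown in the result table; "unclear" is the
# fallback when the model's response cannot be parsed or lacks a field.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")
UNCLEAR = {dim: "unclear" for dim in DIMENSIONS}

def parse_coding_response(raw: str, comment_id: str) -> dict:
    """Extract the coded dimensions for one comment from a raw LLM response.

    Returns all-"unclear" codes if the response is not valid JSON or the
    comment id is missing from it (hypothetical fallback behavior).
    """
    try:
        records = json.loads(raw)
    except json.JSONDecodeError:
        # Malformed output, e.g. an unbalanced bracket, yields the fallback.
        return dict(UNCLEAR)
    for rec in records:
        if rec.get("id") == comment_id:
            return {dim: rec.get(dim, "unclear") for dim in DIMENSIONS}
    return dict(UNCLEAR)
```

A response that fails `json.loads` therefore surfaces in the result table exactly as shown: every dimension coded "unclear" even though the raw text contains plausible labels.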