Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
This is a very flawed approach and not how those detectors work at all. Your approach could maybe answer the question "is this porn video of Emma Watson real or fake?" by essentially looking up all existing imagery of Emma Watson, but it could not confidently decide whether a leaked video of Putin admitting that the whole Ukraine operation has been a complete disaster is genuinely new material, and it certainly could not decide that for imagery of people and places it has never seen before, like an ex-girlfriend someone deep-faked incriminating videos of. You essentially have to detect whether any pixel distortion/manipulation has happened at all, regardless of the source material. It's much more a meta-analysis of pixel manipulation, which should itself be fairly possible to distort, and then you have to detect patterns of deliberate distortion of the pixel manipulation, and so on.
Source: reddit · AI Harm Incident · 1651324641.0 · ♥ 7
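The comment's core claim, that detection must flag manipulation artifacts regardless of the source material, is roughly what noise-inconsistency forensics does: spliced or regenerated regions often carry local noise statistics that differ from the rest of the frame. A minimal sketch on synthetic data follows; the function name, block size, and synthetic noise levels are illustrative assumptions, not any real detector.

```python
import numpy as np

def noise_inconsistency_map(img: np.ndarray, block: int = 8) -> np.ndarray:
    """Mean high-pass residual energy per block.

    Pasted or regenerated regions often have a noise level that differs
    from the camera noise in the rest of the image, so their blocks
    stand out in this map.
    """
    # High-pass residual: absolute differences to horizontal and
    # vertical neighbours, cropped to a common shape.
    dx = np.abs(np.diff(img, axis=1))[:-1, :]
    dy = np.abs(np.diff(img, axis=0))[:, :-1]
    resid = dx + dy
    h = (resid.shape[0] // block) * block
    w = (resid.shape[1] // block) * block
    r = resid[:h, :w].reshape(h // block, block, w // block, block)
    return r.mean(axis=(1, 3))

rng = np.random.default_rng(0)
clean = rng.normal(0.0, 1.0, (64, 64))            # uniform sensor noise
forged = clean.copy()
forged[16:48, 16:48] = rng.normal(0.0, 4.0, (32, 32))  # pasted region, noisier
m = noise_inconsistency_map(forged)
# Blocks inside the pasted region show markedly higher residual energy.
```

As the comment notes, a forger who renormalizes the noise after pasting defeats exactly this statistic, which is why real forensic pipelines stack many such cues rather than relying on one.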
Coding Result
Dimension       Value
Responsibility  none
Reasoning       consequentialist
Policy          none
Emotion         indifference
Coded at        2026-04-25T08:33:43.502452
Raw LLM Response
[
  {"id":"rdc_i6sc36c","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"rdc_i6s13ny","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"rdc_i6s3skj","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"rdc_i6rvael","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"rdc_i6sfkfp","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"}
]
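One way to sanity-check that the rendered coding table matches the raw model response is to parse the JSON and index the records by id. A minimal sketch, assuming the record shown above corresponds to id rdc_i6sc36c and using the dimension field names from the coding scheme:

```python
import json

# Raw annotator-model output, copied verbatim from the record above.
raw = """[
  {"id":"rdc_i6sc36c","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"rdc_i6s13ny","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"rdc_i6s3skj","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"rdc_i6rvael","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"rdc_i6sfkfp","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"}
]"""

# Index records by id so a specific comment's coding can be looked up.
codes = {rec["id"]: rec for rec in json.loads(raw)}
shown = codes["rdc_i6sc36c"]  # assumed to be the record rendered above
for dim in ("responsibility", "reasoning", "policy", "emotion"):
    print(f"{dim}: {shown[dim]}")
```

Validating the parsed ids and dimension values against the displayed table catches truncated or malformed model output before it is coded.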