Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Having a computer program detect fakes is good, but it doesn't really solve the problem. If a human doesn't know whether or not to believe the video; then they also won't know whether or not to believe the AI's analysis of whether it is real or not. A person could make a fake video, and also a fake analysis saying that the video is real... So now do we start training AI to check if the deep-fake test is a real test?
Source: reddit · AI Harm Incident · timestamp 1651313903.0 (Unix epoch) · ♥ 1
Coding Result
Dimension        Value
Responsibility   none
Reasoning        consequentialist
Policy           unclear
Emotion          fear
Coded at         2026-04-25T08:33:43.502452
Raw LLM Response
[
  {"id": "rdc_i6s6o1y", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "rdc_i6rk0t4", "responsibility": "none", "reasoning": "consequentialist", "policy": "unclear", "emotion": "mixed"},
  {"id": "rdc_i6ru287", "responsibility": "user", "reasoning": "deontological", "policy": "none", "emotion": "approval"},
  {"id": "rdc_i6rwq8f", "responsibility": "none", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "rdc_i6rx8fi", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "outrage"}
]
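The batch response above can be parsed back into per-comment codes with standard JSON handling. A minimal sketch in Python, assuming the array-of-objects structure shown; the mapping of the displayed comment to id rdc_i6rwq8f is an inference from its matching dimension values, not something the raw response states:

```python
import json

# Raw batch response copied from above: a JSON array, one object per coded comment.
RAW = (
    '[{"id":"rdc_i6s6o1y","responsibility":"none","reasoning":"unclear",'
    '"policy":"unclear","emotion":"indifference"},'
    '{"id":"rdc_i6rk0t4","responsibility":"none","reasoning":"consequentialist",'
    '"policy":"unclear","emotion":"mixed"},'
    '{"id":"rdc_i6ru287","responsibility":"user","reasoning":"deontological",'
    '"policy":"none","emotion":"approval"},'
    '{"id":"rdc_i6rwq8f","responsibility":"none","reasoning":"consequentialist",'
    '"policy":"unclear","emotion":"fear"},'
    '{"id":"rdc_i6rx8fi","responsibility":"none","reasoning":"unclear",'
    '"policy":"unclear","emotion":"outrage"}]'
)

def coding_for(raw: str, comment_id: str) -> dict:
    """Return the coded dimensions for one comment id; raises StopIteration if absent."""
    entries = json.loads(raw)
    return next(e for e in entries if e["id"] == comment_id)

# Look up the entry that matches the coding result shown above.
codes = coding_for(RAW, "rdc_i6rwq8f")
print(codes["emotion"])  # fear
```

Looking the entry up by `id` rather than by position keeps the inspection view robust if the model returns the array in a different order.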