Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
> LLM hallucinations are the events in which ML models, particularly large language models (LLMs) like GPT-3 or GPT-4, produce outputs that are coherent and grammatically correct but factually incorrect or nonsensical. “Hallucinations” in this context means the generation of false or misleading information. These hallucinations can occur due to various factors, such as limitations in training data, biases in the model, or the inherent complexity of language. https://www.iguazio.com/glossary/llm-hallucination/
reddit · AI Jobs · 1730723250.0 · ♥ 3
Coding Result
Dimension        Value
Responsibility   ai_itself
Reasoning        unclear
Policy           unclear
Emotion          mixed
Coded at         2026-04-25T08:33:43.502452
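
Together these dimensions form a small fixed schema for one coded comment. As a minimal sketch, a record could be represented along the following lines; the class and field names are hypothetical, mirroring the table above rather than any confirmed internal schema, with example values drawn from the raw response below:

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class CodingResult:
    """One coded comment; fields mirror the Dimension/Value table above."""
    id: str              # comment id, e.g. "rdc_lvc52u4"
    responsibility: str  # e.g. "ai_itself", "developer", "none"
    reasoning: str       # e.g. "deontological", "consequentialist", "unclear"
    policy: str          # e.g. "industry_self", "none", "unclear"
    emotion: str         # e.g. "outrage", "approval", "indifference", "mixed"
    coded_at: datetime   # when the coding was stored
```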
Raw LLM Response
[ {"id":"rdc_lvitt9i","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"outrage"}, {"id":"rdc_lvyyrmq","responsibility":"none","reasoning":"mixed","policy":"industry_self","emotion":"approval"}, {"id":"rdc_lv8olp3","responsibility":"developer","reasoning":"unclear","policy":"unclear","emotion":"outrage"}, {"id":"rdc_lv9a12j","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"}, {"id":"rdc_lvc52u4","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"} ]