Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
>Now us as a society, are supposed to rely solely on the information provided by **one model** to which we *can’t cross verify with another model on the same platform* to check if the model was lying, omitting, manipulating, hallucinating etc. If you're solely relying on LLMs for information and only using LLMs to verify the responses given to you from a model you're already doomed. Have people already forgotten how to verify information pre-AI hype cycle? Use a regular search engine and find reputable sources like you would when you're writing an essay / report in high school.
reddit · AI Responsibility · 1754668171.0 · ♥ 3
Coding Result
Dimension        Value
Responsibility   distributed
Reasoning        deontological
Policy           unclear
Emotion          fear
Coded at         2026-04-25T08:33:43.502452
Raw LLM Response
[
  {"id": "rdc_n7kxusj", "responsibility": "company",     "reasoning": "unclear",       "policy": "unclear",  "emotion": "resignation"},
  {"id": "rdc_n7kyqmc", "responsibility": "company",     "reasoning": "unclear",       "policy": "none",     "emotion": "outrage"},
  {"id": "rdc_n7mbvuh", "responsibility": "distributed", "reasoning": "deontological", "policy": "unclear",  "emotion": "fear"},
  {"id": "rdc_n7oeqbd", "responsibility": "company",     "reasoning": "unclear",       "policy": "unclear",  "emotion": "outrage"},
  {"id": "rdc_n7orhhl", "responsibility": "company",     "reasoning": "unclear",       "policy": "regulate", "emotion": "outrage"}
]
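Because the model returns one JSON array covering a whole batch, matching a coded comment back to its record means filtering the array by `id`. A minimal sketch of that lookup, assuming the raw response parses as a JSON list of objects (`code_for` is a hypothetical helper, not part of any pipeline shown here):

```python
import json

# Raw LLM response, copied verbatim from the batch above.
raw = '''[
  {"id": "rdc_n7kxusj", "responsibility": "company", "reasoning": "unclear", "policy": "unclear", "emotion": "resignation"},
  {"id": "rdc_n7kyqmc", "responsibility": "company", "reasoning": "unclear", "policy": "none", "emotion": "outrage"},
  {"id": "rdc_n7mbvuh", "responsibility": "distributed", "reasoning": "deontological", "policy": "unclear", "emotion": "fear"},
  {"id": "rdc_n7oeqbd", "responsibility": "company", "reasoning": "unclear", "policy": "unclear", "emotion": "outrage"},
  {"id": "rdc_n7orhhl", "responsibility": "company", "reasoning": "unclear", "policy": "regulate", "emotion": "outrage"}
]'''

def code_for(records, comment_id):
    """Return the coding record whose id matches comment_id, or None."""
    return next((r for r in records if r["id"] == comment_id), None)

records = json.loads(raw)
rec = code_for(records, "rdc_n7mbvuh")
print(rec["responsibility"], rec["reasoning"], rec["policy"], rec["emotion"])
# distributed deontological unclear fear
```

The record returned for `rdc_n7mbvuh` matches the Coding Result table above (distributed / deontological / unclear / fear), which is a quick sanity check that the displayed dimensions really come from this raw response.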