Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Reposting from a comment because this seems like a common misunderstanding. LLMs are not that smart. It's relatively easy to trick or persuade them into bending or breaking their rules, as well as revealing training data. Some sources:
- This DeepMind (Google vs Microsoft/OpenAI lol) research retrieved several MB of training data from ChatGPT with a relatively simple prompt: https://not-just-memorization.github.io/extracting-training-data-from-chatgpt.html
- Adversarial attacks on LLMs that can use random character or word injection to corrupt the output: https://llm-attacks.org
reddit · AI Responsibility · 1706970568.0 (Unix timestamp)
Coding Result
Dimension       Value
Responsibility  none
Reasoning       unclear
Policy          none
Emotion         mixed
Coded at        2026-04-25T08:33:43.502452
Raw LLM Response
[{"id":"rdc_korf2at","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
 {"id":"rdc_koqdwt4","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"},
 {"id":"rdc_koqmxbk","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
 {"id":"rdc_koq46x9","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
 {"id":"rdc_kouldd1","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"}]
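A minimal sketch of how a raw response like the one above can be parsed back into per-comment codes. The field names (`id`, `responsibility`, `reasoning`, `policy`, `emotion`) are taken from the response itself; the `code_for` helper and the truncated two-item sample are illustrative, not part of the tool.

```python
import json

# Truncated sample of the raw model output above (two of the five items),
# kept verbatim so the parsing logic matches the real response shape.
raw = (
    '[{"id":"rdc_korf2at","responsibility":"none","reasoning":"unclear",'
    '"policy":"none","emotion":"indifference"},'
    '{"id":"rdc_koqmxbk","responsibility":"none","reasoning":"unclear",'
    '"policy":"none","emotion":"mixed"}]'
)

def code_for(raw_response: str, comment_id: str) -> dict:
    """Parse the model's JSON array and return the coding for one comment."""
    rows = json.loads(raw_response)
    by_id = {row["id"]: row for row in rows}  # index codes by comment id
    return by_id[comment_id]

coding = code_for(raw, "rdc_koqmxbk")
print(coding["emotion"])  # mixed
```

Because the model returns one JSON object per comment, looking up the coded dimensions shown in the table is a single dictionary access once the array is parsed; a `KeyError` here would indicate the model skipped or renamed a comment id.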