Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- "Weird times where people defend obviously exploitative corps over their fellow c…" (ytc_UgymcB2ty…)
- "Ok....but make the algorithms to avoid Afghan, Yemeni innocent children & young …" (ytc_UgzB9c5f2…)
- "Software engineer here and I am confident that AI won’t take over Software Engin…" (ytc_UgwsiBECx…)
- "there are no programmers? Ok so you’re saying that no new companies are gonna be…" (ytc_UgwSwvnY7…)
- "I mean we could take the AI Generated images as an inspiration and then drawing …" (ytc_UgzKywIuB…)
- "Today I let my older brother try character ai and let’s just say no one should r…" (ytc_Ugx3mTpDI…)
- "Human are more intelligent than AI 😂 creators are always more powerful than the …" (ytc_Ugz6epbQw…)
- "Sunday, October 26, 2025 . . . Greetings, Everyone. This is one of the most grou…" (ytc_UgztPnt4I…)
Comment
Reposting from a comment because this seems like a common misunderstanding.
LLMs are not that smart. It's relatively easy to trick or persuade them into bending or breaking their rules, as well as into revealing training data.
Some sources:
- this DeepMind research (Google vs. Microsoft/OpenAI, lol) retrieved several MB of training data from ChatGPT with a relatively simple prompt: https://not-just-memorization.github.io/extracting-training-data-from-chatgpt.html
- adversarial attacks on LLMs that use random character or word injection to corrupt the output: https://llm-attacks.org
reddit · AI Responsibility · 1706970568.0
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | unclear |
| Policy | none |
| Emotion | mixed |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
[{"id":"rdc_korf2at","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},{"id":"rdc_koqdwt4","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"},{"id":"rdc_koqmxbk","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},{"id":"rdc_koq46x9","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},{"id":"rdc_kouldd1","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"}]
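The coded-dimension table above is filled in from a raw response like this one: a JSON array with one record per comment ID, each carrying the four coding dimensions. A minimal sketch of parsing and validating such a response follows; the `parse_codes` helper and the strict all-dimensions check are illustrative assumptions, not part of the pipeline shown here.

```python
import json

# The raw model response shown above: a JSON array of per-comment codes.
RAW = (
    '[{"id":"rdc_korf2at","responsibility":"none","reasoning":"unclear",'
    '"policy":"none","emotion":"indifference"},'
    '{"id":"rdc_koqdwt4","responsibility":"ai_itself","reasoning":"consequentialist",'
    '"policy":"none","emotion":"approval"},'
    '{"id":"rdc_koqmxbk","responsibility":"none","reasoning":"unclear",'
    '"policy":"none","emotion":"mixed"},'
    '{"id":"rdc_koq46x9","responsibility":"company","reasoning":"deontological",'
    '"policy":"liability","emotion":"outrage"},'
    '{"id":"rdc_kouldd1","responsibility":"none","reasoning":"unclear",'
    '"policy":"none","emotion":"approval"}]'
)

# The four coding dimensions displayed in the result table.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def parse_codes(raw):
    """Parse the model's JSON array into an {id: codes} mapping,
    skipping any record missing an ID or a coding dimension."""
    out = {}
    for rec in json.loads(raw):
        if "id" in rec and all(dim in rec for dim in DIMENSIONS):
            out[rec["id"]] = {dim: rec[dim] for dim in DIMENSIONS}
    return out

codes = parse_codes(RAW)
# Look up the record behind the table above by its comment ID.
print(codes["rdc_koqmxbk"]["emotion"])  # → mixed
```

Validating every dimension before accepting a record matters here because the model occasionally returns malformed or incomplete JSON objects, and a silently missing field would otherwise show up as a blank cell in the coding table.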