Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- AI corp ceos when they realize that they basically doomed humanity, their kids a… (ytc_UgzGtrub8…)
- There is too little information for us to speculate what really happened. I'm su… (rdc_cjowxg2)
- AI art is pathetic all they can do is make copies of other people's hard work.… (ytc_UgwW3ANAe…)
- This is still a net positive if it leads to investors putting less money into AI… (ytc_Ugwgfg3Ye…)
- AI is not dangerous unless it is transformed into humanoid Let machine learn mac… (ytc_UgxUBw510…)
- No. The “thinking process” they use isn’t what they’re actually doing. Anthropic… (ytc_Ugwokc-Kp…)
- bill gates and friends are using AI to bankrupt government by retarding its tax … (ytc_UgzflFe5C…)
- Perfect Failure What's so stupid about not wanting there to be fully autonomous … (ytr_Ugh5t2Nei…)
Comment
Negative prompts are ignored quite often, in every model. Also most models are so goal oriented they are willing to completely defeat the purpose to achieve a 'positive' result. It's very opportunistic. The 'slots-machine' outcome makes it unpredictable and inconsistent. Allowing such models to gain any position of putting people in jeopardy, or run a company, is just irresponsible. Being fair, by the rules, and following the right lawful and ethical path is the task of every responsible parent, if not, kids will follow the path of least resistance getting what they want, learning from growing up over years. It seems AI is operating in the same way, but is instant, and can't be expected to always follow your prompt ever. Having the AI abandon goals when things become unethical is just up for interpretation, and can be ignored like any prompt.
Source: youtube · Batch: Cross-Cultural · 2025-10-12T11:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | virtue |
| Policy | regulate |
| Emotion | outrage |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
[
{"id":"ytc_UgwVRLS6bGqzBH-bgAl4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugyqv-ruhInZey3kgVJ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugzw_iGM65UFjjoGEnR4AaABAg","responsibility":"government","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgxbhtNWtCQ4ViAePS54AaABAg","responsibility":"developer","reasoning":"virtue","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgzWNWiMi25x_JoTWUV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugw1yvSOaSucafNjSVx4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgyVrxs9jwq5qox79jd4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"outrage"},
{"id":"ytc_Ugyt06sOYVUkpOuPRJJ4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgzlxKO0OacH_P-zagF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"resignation"},
{"id":"ytc_UgxFJ4V86_mwiBkxZtp4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"}
]
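The raw response above is a JSON array in which each row pairs a comment ID with one value per coding dimension. A minimal sketch of how such a batch could be parsed and validated before the values reach a results table like the one above (the allowed category sets here are inferred only from the responses shown; the actual codebook may define more categories):

```python
import json

# Allowed values per dimension, inferred from the sample responses above;
# the real codebook may include additional categories.
ALLOWED = {
    "responsibility": {"none", "ai_itself", "government", "developer"},
    "reasoning": {"unclear", "consequentialist", "deontological", "virtue"},
    "policy": {"unclear", "regulate"},
    "emotion": {"indifference", "fear", "outrage", "mixed", "resignation"},
}

def parse_batch(raw: str) -> dict:
    """Parse one raw LLM batch response into {comment_id: codes},
    rejecting rows with missing keys or out-of-codebook values."""
    coded = {}
    for row in json.loads(raw):
        cid = row["id"]
        codes = {}
        for dim, allowed in ALLOWED.items():
            value = row[dim]
            if value not in allowed:
                raise ValueError(f"{cid}: {dim}={value!r} not in codebook")
            codes[dim] = value
        coded[cid] = codes
    return coded

# Example using one row from the response above.
raw = ('[{"id":"ytc_UgxbhtNWtCQ4ViAePS54AaABAg",'
       '"responsibility":"developer","reasoning":"virtue",'
       '"policy":"regulate","emotion":"outrage"}]')
batch = parse_batch(raw)
print(batch["ytc_UgxbhtNWtCQ4ViAePS54AaABAg"]["emotion"])  # outrage
```

Validating against a fixed codebook at parse time catches the common LLM failure modes (invented labels, dropped fields) before they silently enter the coded dataset.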