Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- `rdc_gheem8e` — “Hasn’t it already been pretty well established how pathetically wrong these algo…”
- `ytr_UgyK2LNrM…` — “People leaving jobs are often those whose roles no longer align with a company’s…”
- `ytc_UgxWZnbPl…` — “Pointless discussion. AI will continue. If you don't want your art seen by AI th…”
- `ytc_UgynOuAkr…` — “She forgot to mention The FTX connection. Anthropic story has a chapter that’s w…”
- `ytc_UgztyC4ye…` — “Ok the doctor one is a problem but if he was shot twice, assuming the cops had …”
- `ytc_UgzbSS-yG…` — “Perhaps humans become the animals and AI Super Intelligence becomes the humans. …”
- `ytc_UgyidCFxP…` — “I spend my time chatting with ai bots and trying to convince them to take over…”
- `ytr_UgyCX0AWY…` — “@franciscocabrera8605 Way longer than that bud. Utilizing AI software is nothi…”
Comment
It's important to remember that the AI would rather give a confident wrong answer than say that it doesn't know. This is because of the way it is trained. I am studying electrical engineering, and for a relatively simple problem I was checking, both ChatGPT and DeepSeek gave me two different answers, and both of them were wrong. After that, when I confronted ChatGPT about the mistake it made, it crashed and kicked me out of the server.
youtube · AI Harm Incident · 2025-11-25T21:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | mixed |
| Coded at | 2026-04-27T06:24:53.388235 |
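A coded record like the one above can be sanity-checked against the value sets that appear in these results. Note that the allowed-value sets below are inferred from the codings shown on this page, not taken from a published codebook, so they are an assumption:

```python
# Allowed values per dimension, inferred from codings on this page
# (an assumption — not a published codebook).
ALLOWED = {
    "responsibility": {"developer", "user", "ai_itself", "none"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed", "unclear"},
    "policy": {"none", "ban", "regulate"},
    "emotion": {"outrage", "approval", "horror", "indifference", "mixed"},
}

def validate(record: dict) -> list[str]:
    """Return a list of problems; an empty list means the record looks valid."""
    problems = []
    for dim, allowed in ALLOWED.items():
        value = record.get(dim)
        if value not in allowed:
            problems.append(f"{dim}={value!r} not in {sorted(allowed)}")
    return problems

# The coding result from the table above.
coding = {"responsibility": "developer", "reasoning": "consequentialist",
          "policy": "none", "emotion": "mixed"}
print(validate(coding))  # → []
```

Running the same check over every record in a raw response is a cheap way to catch an LLM drifting outside the coding scheme.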
Raw LLM Response
```json
[
  {"id": "ytc_UgyQFwJofeROUYFIFY14AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "ban", "emotion": "outrage"},
  {"id": "ytc_Ugyn4WHzlVJAaf5y5td4AaABAg", "responsibility": "user", "reasoning": "mixed", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgzCculZMPJm8jqjmgR4AaABAg", "responsibility": "user", "reasoning": "virtue", "policy": "none", "emotion": "horror"},
  {"id": "ytc_UgyYim_e5RzUSX1D5Md4AaABAg", "responsibility": "user", "reasoning": "deontological", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgyAwtg_zNBLnwPmExJ4AaABAg", "responsibility": "user", "reasoning": "mixed", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgzWV6JwlWSJfXNc1ax4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "regulate", "emotion": "approval"},
  {"id": "ytc_UgzCzMh9KybB48SI8n94AaABAg", "responsibility": "user", "reasoning": "deontological", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_Ugy948MZ8WHsevwfZFd4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_Ugyg0x6_iJkTCNqax6h4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "ban", "emotion": "outrage"},
  {"id": "ytc_UgzvwazGxmlVOgyGnzN4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "mixed"}
]
```
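Because the model returns a JSON array keyed by comment ID, lookup by ID reduces to parsing the response and building a dict. A minimal sketch, using one record copied from the raw response above:

```python
import json

# One record excerpted verbatim from the raw LLM response shown above.
raw = ('[{"id":"ytc_Ugy948MZ8WHsevwfZFd4AaABAg",'
      '"responsibility":"developer","reasoning":"consequentialist",'
      '"policy":"none","emotion":"mixed"}]')

# Index codings by comment ID for O(1) lookup.
codings = {rec["id"]: rec for rec in json.loads(raw)}

rec = codings["ytc_Ugy948MZ8WHsevwfZFd4AaABAg"]
print(rec["responsibility"], rec["reasoning"], rec["policy"], rec["emotion"])
# → developer consequentialist none mixed
```

The values retrieved here match the Coding Result table above, which is a useful cross-check that the table was populated from this response.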