Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "This is what I mean when I say Ai can ruin someone's life, they can generate you…" (ytc_UgyDavPEW…)
- "Max misses the point that if you ask AI to design your restaurant sandwich the. …" (ytc_UgwT7kNtE…)
- "there is little about the unintended consequences of AI. Social Media had lots p…" (ytc_UgxwJkzVw…)
- "People say AI will replace thinking jobs, but the hardest things to replace are …" (ytc_UgxmafQrz…)
- "You should NEVER assume automated tech will work as intended every time. As much…" (ytc_UgxiqWl2K…)
- "You don't need to be a fucking expert to know how fucked we are. Humans are so s…" (ytc_UgzL_kReq…)
- "What discussion about our spiritual wellbeing, the forces at work interdimension…" (ytc_UgxgZSR3S…)
- "Won't just be mundane easy labor being replaced. Doctors, lawyers, architects, a…" (ytc_UgwB-su4U…)
Comment
It’s a product of over-optimization, while they are trying to optimize the model to not output certain things or optimize it to be better at certain tasks, there can be unintended issue like this that pop up, the hard thing is finding a balance of optimization and performance. John Schulman cofounder of OpenAI just presented about this today at ICML 2023. don’t know when it’ll be up on youtube but definitely look for it in the coming weeks when it comes out if you’re interested. the talk is called “Proxy objectives in reinforcement learning from human feedback”
Source: reddit
Topic: AI Responsibility
Timestamp: 1690513882.0 (Unix epoch seconds)
♥ 15
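The posting time above is stored as raw Unix epoch seconds. A minimal Python sketch of converting it to a readable UTC date (the variable name is illustrative, not from the tool):

```python
from datetime import datetime, timezone

# Unix epoch seconds, as stored in the record above
posted = 1690513882.0

# Convert to a timezone-aware UTC datetime
dt = datetime.fromtimestamp(posted, tz=timezone.utc)
print(dt.isoformat())  # 2023-07-28T03:11:22+00:00
```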
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | unclear |
| Policy | industry_self |
| Emotion | mixed |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
[
{"id":"rdc_jts18tg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"rdc_jtr821v","responsibility":"developer","reasoning":"consequentialist","policy":"industry_self","emotion":"mixed"},
{"id":"rdc_jtqvu13","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"rdc_jtrtg4s","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"rdc_jtswu4d","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"mixed"}
]
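The raw response is a JSON array with one coding object per comment ID, each carrying the four dimensions shown in the table above (responsibility, reasoning, policy, emotion). A minimal sketch of parsing it and supporting the "look up by comment ID" view; the field names come from the JSON above, while the variable names are hypothetical:

```python
import json

# Two entries copied from the raw LLM response above
raw = """[
  {"id":"rdc_jts18tg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"rdc_jtr821v","responsibility":"developer","reasoning":"consequentialist","policy":"industry_self","emotion":"mixed"}
]"""

# Index codings by comment ID so a single comment can be inspected directly
codings = {row["id"]: row for row in json.loads(raw)}

print(codings["rdc_jtr821v"]["policy"])   # industry_self
print(codings["rdc_jts18tg"]["emotion"])  # indifference
```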