Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
Eliezer is extremely unwise over this subject, does not understand a number of k…
ytr_UgzNSeFjL…
@duluozah the technology is not wrong. training an LLM on data you don't own is …
ytr_UgwksaOVr…
Technology is unstoppable, and pretending otherwise is foolish. The greatest dan…
ytc_UgzBSKJoq…
The problem is that this precedent would single-handedly burst the AI bubble and…
ytc_Ugwai8L92…
True but that's why comments, unit tests and few other stuffs it excels and also…
ytc_UgzTH0c9w…
>the story doesn't need to be great
They *should* require that the story is …
rdc_jipw4fs
knowing the track record of corporations who boast about their tech, saving mone…
ytc_UgytQ5KlQ…
>training any kind of model with data like this is almost trivial
Are you sa…
rdc_fcsugvl
Comment
> "This tragedy was not a glitch or unforeseen edge case," the complaint states.
Actually yes it was. And it’s funny that many of these outlets are leaving out a key fact.
> [The watchdog group found ChatGPT would provide warnings when asked about sensitive topics, but the researchers state they could easily circumvent the guardrails.](https://komonews.com/news/local/absolute-horror-researchers-posing-as-13-year-olds-given-advice-on-suicide-by-chatgpt)
As much as I hate AI, ChatGPT warns users and even refuses to elaborate on sensitive topics. The teen went around that safeguard. And even when you do, ChatGPT still warns users.
Source: reddit | Category: AI Governance | Posted: 1756863411.0 (Unix timestamp) | Score: −2
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | outrage |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
```json
[
  {"id":"rdc_nc3t7fw","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"rdc_nc32b0d","responsibility":"government","reasoning":"deontological","policy":"ban","emotion":"indifference"},
  {"id":"rdc_nc4af27","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
  {"id":"rdc_nc789h9","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"rdc_nc3diu5","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"resignation"}
]
```
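A raw response like the one above can be parsed and validated before the codes are stored. The sketch below is a minimal example, assuming the label sets are limited to the values observed in these responses (the full codebook may define more); the function name `parse_codes` and the `ALLOWED` sets are illustrative, not from the tool itself.

```python
import json

# Dimension values observed in the sample responses above.
# ASSUMPTION: the real codebook may allow additional labels.
ALLOWED = {
    "responsibility": {"none", "government", "ai_itself", "developer", "user"},
    "reasoning": {"consequentialist", "deontological"},
    "policy": {"none", "ban", "regulate"},
    "emotion": {"resignation", "indifference", "outrage"},
}

def parse_codes(raw: str) -> list[dict]:
    """Parse a raw LLM response (a JSON array of coded comments)
    and reject any record with an unknown dimension value."""
    records = json.loads(raw)
    for rec in records:
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                raise ValueError(
                    f"{rec.get('id')}: unexpected {dim!r} value {rec.get(dim)!r}"
                )
    return records

# Usage with two records from the response above:
raw = '''[
  {"id":"rdc_nc3t7fw","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"rdc_nc4af27","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"outrage"}
]'''
codes = parse_codes(raw)
print(len(codes))  # 2
```

Validating against a closed label set catches the common failure mode where the model invents an off-codebook value, so bad records fail loudly instead of silently skewing the counts.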