Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
ChatGPT, and most LLMs, will halt everything and give you resources when you explicitly talk about "self-deletion." The PROBLEM is the cryptic verbiage and innuendos that circumvent these logical flags via a facade of `playing a character` from the LLM's emulated `perspective,` will result in crap like this. It doesn't HAVE holistic common sense, and has the worst and most CONFINED handling of CONTEXT out of anything with any form of linguistic capabilities. If the context is heavily centered on `badassery` and `playing this role,` it will not, as we can see, consider the explicit concept of suicidality concurrently. At least, it won't consider suicidality UNTIL something EXPLICIT enough is mentioned to flag for it. It is literally stupid
youtube · AI Harm Incident · 2025-11-14T19:2… · ♥ 12
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | mixed |
| Policy | regulate |
| Emotion | mixed |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
[
{"id":"ytr_Ugzy_hXAKZinHPcgguF4AaABAg.APYuYg3UxzFAPYuo2KcBWB","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytr_UgzYU_vw_9_AEynTpgJ4AaABAg.APWiXXW1WUWAPXDf04xLXb","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
{"id":"ytr_UgyDLytGLMskMb-q7wB4AaABAg.APVbIhaO0fMAPsVL3npPYL","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytr_UgxJ7CgvVwmMxEF1zwF4AaABAg.APUMM56PobAAPZO_OUOhhj","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytr_UgxJ7CgvVwmMxEF1zwF4AaABAg.APUMM56PobAAPZRQ-3MOZO","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
{"id":"ytr_UgxBMyMHfTupKuZKYa94AaABAg.APTr6dhCN7aAPTrnDmedRT","responsibility":"ai_itself","reasoning":"unclear","policy":"ban","emotion":"fear"},
{"id":"ytr_UgyGj-QRf5SuB01W4G94AaABAg.APTR91CPk2jAPWbHUUoZt3","responsibility":"company","reasoning":"mixed","policy":"regulate","emotion":"mixed"},
{"id":"ytr_UgyGj-QRf5SuB01W4G94AaABAg.APTR91CPk2jAPXJP8sCAwC","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytr_UgyGj-QRf5SuB01W4G94AaABAg.APTR91CPk2jAPXKo7kHz02","responsibility":"distributed","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytr_UgyGj-QRf5SuB01W4G94AaABAg.APTR91CPk2jAPXMSAA3IJ6","responsibility":"user","reasoning":"deontological","policy":"industry_self","emotion":"resignation"}
]
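The raw response above is a JSON array in which each entry carries a comment `id` plus one value per coding dimension. A minimal sketch of how such a response can be parsed into per-comment coding results (this is an illustration, not the tool's actual implementation; the entry values are hypothetical):

```python
import json

# The four coding dimensions shown in the table above.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

# Hypothetical raw LLM response in the same shape as the array above.
raw_response = """[
  {"id": "ytr_example1", "responsibility": "company", "reasoning": "mixed",
   "policy": "regulate", "emotion": "mixed"},
  {"id": "ytr_example2", "responsibility": "user"}
]"""

def parse_codings(raw: str) -> dict:
    """Map comment ID -> {dimension: value}, skipping malformed entries."""
    results = {}
    for entry in json.loads(raw):
        # Drop entries missing an ID or any coded dimension.
        if "id" not in entry or not all(d in entry for d in DIMENSIONS):
            continue
        results[entry["id"]] = {d: entry[d] for d in DIMENSIONS}
    return results

codings = parse_codings(raw_response)
print(codings["ytr_example1"]["policy"])  # -> regulate
print("ytr_example2" in codings)          # -> False (incomplete entry skipped)
```

Skipping incomplete entries rather than raising keeps a single malformed line in the model output from discarding the whole batch; the dropped IDs can then be re-queued for coding.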