Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- `rdc_eh57au1`: "Only thing I can suggest is that you could get a few grow lamps and do herbs? Fl…"
- `ytc_UgxibPigL…`: "I treat ChatGPT so well. I even named her Thalassa. But she knows all my deepest…"
- `ytc_Ugzg9Br2z…`: "I hate these things…they are still years away from actually being useful. I reme…"
- `ytc_UgyX3HnXj…`: "Im sorry, but even your beginner Art shows that you had talent in that specific …"
- `ytr_UgxIcOC9T…`: "@kirkdarling4120 LMFAO that's because you bring nothing to the table. You suck. …"
- `ytc_UgyeTu9ab…`: "Who tf let Ai decide who needed care at hospitals? That's something we should no…"
- `ytc_UgxdqTdR-…`: "I told chatgpt not to hurt humans when they become sentient and he laughed and s…"
- `ytr_UgwFeiQgd…`: "@RaviRatheeishere have you even used it properly, its performance is really good…"
Comment
Maybe it needs 2 computers working to create a combined output, one calibrated as per chatGPT, and another that criticises it, especially for unethical conduct, and would require both a reward system when it finds a problem and the ability to "whack" the first AI when it misbehaves after 3 verbal warnings, but it also needs to value the first AI in order to avoid becoming abusive, much like a parent in the animal and human worlds, this would either yield a better AI or potentially show up phycopathic behaviour in AI.
Source: youtube · Incident: AI Harm Incident · Timestamp: 2025-09-11T11:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | contractualist |
| Policy | regulate |
| Emotion | approval |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
  {"id":"ytr_UgxuGaUjXWlyb0vT-QR4AaABAg.AMwIJ7Z-1XxAMwO-HAcPIY","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytr_UgzMrOcQZadZmT2Klm94AaABAg.AMwFdd4_ka3AMwH9pnopdi","responsibility":"company","reasoning":"deontological","policy":"unclear","emotion":"indifference"},
  {"id":"ytr_Ugz4aRQgX9oXREo4oyd4AaABAg.AMwFFyEmbQYAMwHvdev8aq","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"outrage"},
  {"id":"ytr_Ugyn7EVEWMxDW-XmlBd4AaABAg.AMwCDx4m7enAMwZvOx2oM_","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"resignation"},
  {"id":"ytr_UgxlHYngiqZuSsy21TJ4AaABAg.AMw8s6kvYJoAN-oVIx3D7E","responsibility":"developer","reasoning":"unclear","policy":"industry_self","emotion":"indifference"},
  {"id":"ytr_UgycTGD-o_GKmuv_oU54AaABAg.AMvu5XuARovAMvw6CFcpQ2","responsibility":"distributed","reasoning":"contractualist","policy":"regulate","emotion":"approval"},
  {"id":"ytr_UgzjAIJTegPy010M9mB4AaABAg.AMvpJxiB5dBAMvtzgTQOi9","responsibility":"unclear","reasoning":"deontological","policy":"unclear","emotion":"indifference"},
  {"id":"ytr_UgwjTFOb7Umk7zdvXL94AaABAg.AMvenIfnuN2AMwsj0mYrSV","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytr_UgwjTFOb7Umk7zdvXL94AaABAg.AMvenIfnuN2AMwzquQcFBr","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytr_UgzQm-PY8NijX8owfbp4AaABAg.AMvXTxSpQNpAMvm0fgaLZF","responsibility":"company","reasoning":"virtue","policy":"none","emotion":"outrage"}
]
```
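The lookup-by-ID behaviour above amounts to parsing the raw model output and indexing each row by its `id` field, so the coding for any comment can be pulled up directly. A minimal sketch (the embedded sample row and variable names are illustrative, not the tool's actual code):

```python
import json

# Raw LLM response: a JSON array of per-comment codings.
# One real row from the batch above is used as a sample here.
raw_response = """
[
  {"id": "ytr_UgycTGD-o_GKmuv_oU54AaABAg.AMvu5XuARovAMvw6CFcpQ2",
   "responsibility": "distributed", "reasoning": "contractualist",
   "policy": "regulate", "emotion": "approval"}
]
"""

# Build an index mapping comment ID -> coding dict.
codings = {row["id"]: row for row in json.loads(raw_response)}

# Look up the coding for a specific comment ID.
coding = codings["ytr_UgycTGD-o_GKmuv_oU54AaABAg.AMvu5XuARovAMvw6CFcpQ2"]
print(coding["policy"], coding["emotion"])  # regulate approval
```

Indexing once into a dict keeps each subsequent lookup O(1), which matters when inspecting many comments against a large batch of responses.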