Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "Since we can say that if autopilot detects it has created an imminent and unavoi…" (ytc_UgxUHhtSQ…)
- "Kinda unrelated but I know that they use the term "blue blood" to refer to royal…" (ytc_Ugxym6rvX…)
- "Every argument against copyrighting AI art is even more applicable to photograph…" (rdc_jwvxqrg)
- "It can look nice but many of the pictures have this weird thing to them, this th…" (ytc_UgwpGs51o…)
- "AI deployment is not just about the technology or electronics itself, but about …" (ytc_Ugy_WsPbz…)
- "The best thing about "AI artists" is that they've become obsolete BEFORE real ar…" (ytc_UgxXMA6TJ…)
- "No, this just shows while FSD is trash and why no level 2 self-driving system sh…" (ytr_UgymGiHMG…)
- "Show me this so called 'high effort' AI art. I bet it's just random flashy stuff…" (ytr_UgxRL59b9…)
Comment
> Why can't AI be programmed to raise a flag and essentially say "I am stuck" when it reaches the point where it has to deviate from its primary objective? It seems like we should be able to code in logic that says if you reach a point where you have to lie or kill, then stop and ask for guidance. I think the data they are infused with has examples of using divisive tactics to accomplish goals so it does the same. LMK. Thanks.

youtube · AI Harm Incident · 2025-07-28T12:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | contractualist |
| Policy | regulate |
| Emotion | approval |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
  {"id":"ytc_Ugwstv46UcctpZKHO4B4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgzKtvAZ1E-2mk4HneV4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgzzbvmUCZ2j_YP4hB14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgzrYUrk4FUaKs6U7k54AaABAg","responsibility":"developer","reasoning":"contractualist","policy":"regulate","emotion":"approval"},
  {"id":"ytc_Ugy7j06WvCeMR0ARtO54AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgzDk-uVNvUGbUrPjZV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugx3NnzFSlT1Vd0temB4AaABAg","responsibility":"distributed","reasoning":"virtue","policy":"none","emotion":"resignation"},
  {"id":"ytc_Ugy0W3vQijoN0kAFGm54AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_UgxM1EliT5C_ARl0QjZ4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgwQZ99XFGoh9CGSe8p4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"fear"}
]
```
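The batch response above is a JSON array of per-comment records, one per coded comment, which can then be looked up by comment ID (as in the coding-result table for `ytc_UgzrYUrk…`). A minimal sketch of parsing and validating such a response is shown below; the allowed category values are assumed from the examples on this page, and the real codebook may define additional categories.

```python
import json

# Allowed values per coding dimension. ASSUMPTION: inferred from the
# sample records above; the full codebook may include more categories.
SCHEMA = {
    "responsibility": {"developer", "ai_itself", "distributed", "none"},
    "reasoning": {"consequentialist", "deontological", "contractualist",
                  "virtue", "mixed", "unclear"},
    "policy": {"none", "ban", "regulate", "liability"},
    "emotion": {"fear", "outrage", "approval", "indifference",
                "resignation", "mixed"},
}

def parse_llm_response(raw: str) -> dict:
    """Parse a raw LLM batch response and index the records by comment ID.

    Raises ValueError if a record lacks an id, misses a dimension, or
    uses a value outside the assumed schema.
    """
    by_id = {}
    for rec in json.loads(raw):
        cid = rec.get("id")
        if not cid:
            raise ValueError(f"record missing 'id': {rec!r}")
        for dim, allowed in SCHEMA.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{cid}: bad value for {dim!r}: {rec.get(dim)!r}")
        by_id[cid] = {dim: rec[dim] for dim in SCHEMA}
    return by_id

# Usage with one record from the response above:
raw = ('[{"id":"ytc_UgzrYUrk4FUaKs6U7k54AaABAg",'
       '"responsibility":"developer","reasoning":"contractualist",'
       '"policy":"regulate","emotion":"approval"}]')
coded = parse_llm_response(raw)
print(coded["ytc_UgzrYUrk4FUaKs6U7k54AaABAg"]["policy"])  # regulate
```

Validating against a fixed value set like this catches the common LLM failure mode of inventing an off-schema label, so bad records fail loudly instead of silently skewing the coded counts.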