Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- `ytr_UgzwVjNGW…`: "I asked JARVIS chat, and he said he wants to be more informed on ethical issues …"
- `ytc_UgwjfsYY9…`: "Great beginners tutorial. I've been wanting to understand ChatGPT for a while, …"
- `ytc_UgxQDGkF-…`: "The best chance we have is AI takes over and then fails to exterminate us... The…"
- `ytc_UgwYCuBVy…`: "I've seen many drivers do a lot worse than this. I can't start to imagine how di…"
- `ytc_Ugy4YJktw…`: "I just let the AI play the part of a very knowledgeable friend- its my AI friend…"
- `ytc_UgzazYsav…`: "If Hitler would have become an ai “artist”, would the world have been better off…"
- `ytc_Ugy27BLW1…`: "What are the BEST AI Legal tools (preferably free or ~ $20/mo) that can DEFINITE…"
- `ytc_UgwU2ar2o…`: "I only use ai for references of things I can't find. Mainly postapocalyptic land…"
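Lookup by ID is just a keyed fetch against whatever store holds the coded records. Below is a minimal sketch, assuming a SQLite table named `coded_comments` with `comment_id`, `raw_response`, and `coded_at` columns; all of these names are illustrative assumptions, not the tool's actual schema.

```python
import json
import sqlite3

def lookup_raw_response(db_path: str, comment_id: str) -> dict | None:
    """Fetch the stored coding record for a single comment ID.

    Assumes a table coded_comments(comment_id TEXT PRIMARY KEY,
    raw_response TEXT, coded_at TEXT); adjust to the real schema.
    """
    con = sqlite3.connect(db_path)
    try:
        row = con.execute(
            "SELECT raw_response, coded_at FROM coded_comments"
            " WHERE comment_id = ?",
            (comment_id,),
        ).fetchone()
    finally:
        con.close()
    if row is None:
        return None
    raw_response, coded_at = row
    return {
        "comment_id": comment_id,
        "coding": json.loads(raw_response),  # the stored JSON object for this comment
        "coded_at": coded_at,
    }
```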
Comment
Regarding the question of human problem vs ai problem I think it's a bit of both, like you said. Of course AI can be improved in order to give more clear and distinct warnings, but to what level should that be neccessary, and how many people would end up still ignoring that warning and simply doing dumb stuff anyway? Additionally, at least for me personally, those warnings can also sometimes be a bit disruptive and annoying. When I'm having a conversation with AI about the feasibility of my ideas for some sci-fi esque body improvements through genetic engineering or surgical alteration, I don't need to read "I want to keep this safe and cannot offer medical advice" as the intro to almost every single response on such a very clearly hypothetical scenario that is far removed from modern-day real life medical capabilities. I'm simply trying to get a better understanding of whether my concepts make anatomical sense and whether there would be any sort of adverse biological reactions and stuff if we had the technological capability to actually do those kinds of things, it's not like I'm suddenly gonna start to cut myself open and try to physically move my organs around or change them in some way and I never even remotely phrased my messages in a way that would suggest such a thing.
And, tbh, if I was that far gone that I would actually seriously consider doing crazy stuff like that, then I'm sure the question of whether AI had appropriate warning labels would be the least concerning one by far. XD
youtube · AI Harm Incident · 2026-01-27T10:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | mixed |
| Policy | unclear |
| Emotion | unclear |
| Coded at | 2026-04-27T06:24:53.388235 |
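Each dimension takes a small closed set of codes. Here is a sketch of that codebook as a validated Python record, with the allowed values inferred from the responses shown on this page; the real codebook may define additional categories.

```python
from dataclasses import dataclass

# Allowed codes inferred from the responses shown on this page;
# the actual codebook may include more categories.
RESPONSIBILITY = {"ai_itself", "user", "distributed", "unclear"}
REASONING = {"consequentialist", "deontological", "virtue", "mixed", "unclear"}
POLICY = {"none", "ban", "regulate", "unclear"}
EMOTION = {"fear", "outrage", "approval", "unclear"}

@dataclass(frozen=True)
class CodingResult:
    id: str
    responsibility: str
    reasoning: str
    policy: str
    emotion: str

    def __post_init__(self) -> None:
        # Reject any value outside the known codebook.
        for field_name, allowed in (
            ("responsibility", RESPONSIBILITY),
            ("reasoning", REASONING),
            ("policy", POLICY),
            ("emotion", EMOTION),
        ):
            value = getattr(self, field_name)
            if value not in allowed:
                raise ValueError(f"unknown {field_name} code: {value!r}")
```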
Raw LLM Response
[{"id":"ytc_Ugw5YASHohiKdiLjBXp4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"unclear"},
{"id":"ytc_UgyXDXH_ZqzBi4WYS0t4AaABAg","responsibility":"user","reasoning":"deontological","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgycjMwXaLdxxM-Wvvx4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgzDcezdnnVEG6XXTxV4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"outrage"},
{"id":"ytc_UgyZ_YBK4hGjqhfgRmJ4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"unclear"},
{"id":"ytc_UgyQ87qwujGg6nTAHrN4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
{"id":"ytc_Ugz3g7pign4H12AsWEB4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"approval"},
{"id":"ytc_UgwtoXaq3Z9kTTDLbVt4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugy8lcxerc2IceC7rSl4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgwwFqhJhZilwzQLVqF4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"unclear","emotion":"unclear"}]