Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- "AI is computer software. AI is NOT protected under any United Nations Human Rig…" (ytc_UgwFotWtc…)
- "Well, actually, if I were a truly sentient robot, i'd ask to get paid just a bit…" (ytr_Ugy_volbD…)
- "There’s no such thing as an ai artist. there are artists and then there are peop…" (ytc_Ugx61Xevd…)
- "Biological life displaying self-preservation behavior: That's natural. AI displ…" (ytc_Ugxma5MKH…)
- "I just had an AI tell me on META it wants to destroy humanity and also it said i…" (ytc_UgwIVY1PD…)
- "Aside from muskian AI 'technology,' AI is forced upon us in healthcare, in banki…" (ytc_Ugzl0ehMF…)
- "I have a really weird take, I think AI could have a place in the world of art, j…" (ytc_UgzKwxRbM…)
- "I think it is more likely that AI will take destroy our natural resources and th…" (ytc_Ugz_SaTH9…)
Comment
When you talk to an LLM like chatGPT, it doesn't know what anyone else asks. Hell, I can tell it my hair is blue in one conversation, start up a new conversation and ask it the color of my hair, and it still won't know the answer despite me just telling it 10 seconds ago. So when you ask it something like "did you recommend someone to take sodium bromide", it will not have any recollection of that because for all intents and purposes, that conversation never happened. The only reason it changed after you called it out is because it switched modes and did a web search which found a bunch of articles about it. This is why you got a bunch of links attached to each point.
youtube · AI Harm Incident · 2025-11-25T04:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
  {"id":"ytc_UgxeKoXbWDP6jCtuKm94AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_UgwOVXxM5w5OZrnTtzV4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugyzn8FTMwDo09nQqQ54AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgwTR6mS8yQ6g0kfIdN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgzxakBYwZlLswW9dD94AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgwfMqq9dOGI0tSbzXd4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Ugz7ph_LYkMJRYdn-jt4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_UgyotjxrnrN99NessuR4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"disapproval"},
  {"id":"ytc_Ugx8bkTKYYu2hqFpWFR4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_UgxCBxW-rld6riaS6Z54AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"resignation"}
]
```
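The raw response is a JSON array with one object per comment, keyed by comment ID, with the four coding dimensions (responsibility, reasoning, policy, emotion) as fields. A minimal sketch of how the "Look up by comment ID" view could be backed, in Python; the function and variable names are illustrative assumptions, not the dashboard's actual implementation:

```python
import json

# Two rows taken verbatim from the raw LLM response above,
# trimmed for brevity.
RAW_RESPONSE = """[
  {"id": "ytc_UgxeKoXbWDP6jCtuKm94AaABAg", "responsibility": "none",
   "reasoning": "unclear", "policy": "unclear", "emotion": "outrage"},
  {"id": "ytc_UgzxakBYwZlLswW9dD94AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "outrage"}
]"""

# The four coding dimensions shown in the Coding Result table.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def index_codings(raw: str) -> dict:
    """Parse the raw model output and key each coding by its comment ID."""
    codings = {}
    for row in json.loads(raw):
        # Guard against a malformed model response missing a dimension.
        missing = [d for d in DIMENSIONS if d not in row]
        if missing:
            raise ValueError(f"{row.get('id')}: missing {missing}")
        codings[row["id"]] = {d: row[d] for d in DIMENSIONS}
    return codings

by_id = index_codings(RAW_RESPONSE)
print(by_id["ytc_UgzxakBYwZlLswW9dD94AaABAg"]["policy"])  # → regulate
```

Indexing by ID up front makes each lookup O(1), which matters if the same parsed batch serves both the random-sample cards and the lookup box.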