Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- ytc_UgxKfg9fw…: "As screwed up as it is, technically the AI was right when it came to the crime t…"
- ytc_Ugx65Yx7r…: "Cry about it. It works to create new images. It's beautifull. you listen to a…"
- ytr_Ugyz8bIyN…: "@laurentiuvladutmanea "But these programs are not somebody, and are not capable …"
- ytc_UgxUqeF3o…: "Exploring the limits of AI like ChatGPT can be fascinating, but it's crucial to …"
- ytc_UgyvC_tNx…: "Super intelligence is a scary concept. It would take a tremendous amount of powe…"
- ytc_UgwrakQna…: "Surely, a moralistic, ethical code should be installed from the very outset crea…"
- ytc_UgyV0hV54…: "Technology oligarchs enabled by AI will probably destroy democracy and drive the…"
- ytr_UgzE5c4Fl…: "I'm worried humans will be used to make AI like in the Matrix, because when they…"
Comment
If the chatbot had been patched, it wouldn't actually know it had suggested someone take it in the past, because chat information isn't shared between users and it wasn't part of the bulk training data yet. You can usually ask your chatbot what date its training data is current to. When an AI chatbot is patched, it just has information added to its prompt, e.g. 'if someone talks to you about chloride make sure you tell them that bromide isn't a safe substitute'.
What you are seeing is an artifact of the way chatbots can do a live search and summarize results; those findings will not change your chatbot's underlying training or prompt data.
youtube
AI Harm Incident
2026-01-07T14:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | unclear |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
  {"id":"ytc_UgznktxIXEyR9slrL_F4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugz0CErlTu0HFECd5sd4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_UgweDu4YEPEdUAiAWAp4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwZ_5bu_BUowRAo3DB4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugz0pSaDxbyHsa8wcuR4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgydQgw55Ctr--T_3bZ4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgwgYgsxC9MufnkMPGh4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgwZVdx8uXbM4dGWAWt4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzK1gxtCBZeRUl0nWl4AaABAg","responsibility":"user","reasoning":"deontological","policy":"liability","emotion":"resignation"},
  {"id":"ytc_UgyJV9BInj7UI_MHIyZ4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"mixed"}
]
```
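A raw response like the one above can be parsed and validated before it enters the dataset. The sketch below is a minimal example, assuming the codebook values for each dimension are exactly those observed in the responses on this page (the real codebook may define more); the function name `parse_coding` and the `ALLOWED` table are illustrative, not part of the tool.

```python
import json

# Allowed codes per dimension, inferred from the values observed in the
# raw responses above (assumption: the actual codebook may be larger).
ALLOWED = {
    "responsibility": {"none", "ai_itself", "user", "distributed"},
    "reasoning": {"unclear", "consequentialist", "deontological", "virtue"},
    "policy": {"none", "liability"},
    "emotion": {"indifference", "approval", "fear", "outrage", "mixed", "resignation"},
}

def parse_coding(raw: str) -> dict:
    """Parse a raw LLM coding response and index rows by comment ID.

    Raises ValueError on out-of-codebook values so a bad batch fails
    loudly instead of silently polluting the coded dataset.
    """
    coded = {}
    for row in json.loads(raw):
        cid = row["id"]
        for dim, allowed in ALLOWED.items():
            if row.get(dim) not in allowed:
                raise ValueError(f"{cid}: unexpected {dim} value {row.get(dim)!r}")
        coded[cid] = {dim: row[dim] for dim in ALLOWED}
    return coded
```

Indexing by ID this way also supports the "look up by comment ID" inspection above: `parse_coding(raw)["ytc_…"]` returns that comment's four coded dimensions.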