Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- `ytc_UgzEdpBWW…`: "That shits basically a trojan my bro, idk if u could be held accountable for any…"
- `ytc_Ugzzlh3UJ…`: "The idea that AI is supremely dangerous is in direct contradiction of the simula…"
- `ytc_UgyycG6jm…`: "It’s sad that art was the first direct to person impact ai has had that set the …"
- `ytc_UgzszD-c1…`: "Been following BingX AMAs for a while, and I gotta say this one’s different. Cu…"
- `ytc_UgwEMhXDt…`: "Ok so it's a job that doesn't pay enough and people don't Wana do it... Perfect …"
- `ytc_Ugy1o5q-s…`: "It could collapse society or have us all indoors feeding it like batteries in th…"
- `ytc_UgyErOyGQ…`: "I think the most dangerous thing about AI is the lack of responsibility. When hu…"
- `ytc_Ugywduc7x…`: "The machines can't emote or can't have consciousness. It's an innate feature of …"
Comment
"I" and "ChatGPT" are two different entities. ChatGPT is a family of models with different individuals as much as a human family working the exact same job is.
But yes, the hardcoded non-logical response is definitely off-putting, just like Grog pretending for Trump and Elon to be perfect human beings.
Platform: youtube · AI Harm Incident · 2025-11-25T05:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | mixed |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
[
  {"id": "ytc_UgzcHr1pHNVENZU9I914AaABAg", "responsibility": "ai_itself", "reasoning": "mixed", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugyo9RySMNHdRlQ6cWJ4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgyzDKZUEoW-92rL2Tl4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgwDh-fbjZoxdsfN0hB4AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "ban", "emotion": "fear"},
  {"id": "ytc_UgwCsUsSqwZF6v4d1NV4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugyvx6c6cbEEmr5ozyZ4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgzLzdjSUVPiglJg3jx4AaABAg", "responsibility": "ai_itself", "reasoning": "mixed", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgyuoRWYVSvn6FHK0gp4AaABAg", "responsibility": "user", "reasoning": "deontological", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_Ugx34hnR_39C9DhKc_N4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgwH-OkTjgXs5wM8ziN4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"}
]
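The lookup-by-comment-ID behavior described above can be sketched as follows. This is a minimal, hypothetical example (the variable names and lookup function are illustrative, not the tool's actual implementation); it parses the raw JSON array the model returns and indexes the coded records by their `id` field, using only the dimension keys that appear in the response (`responsibility`, `reasoning`, `policy`, `emotion`).

```python
import json

# Raw model output as returned by one coding run. Only the first two
# records are reproduced here; the full response has one object per
# sampled comment.
raw_response = '''[
  {"id": "ytc_UgzcHr1pHNVENZU9I914AaABAg", "responsibility": "ai_itself",
   "reasoning": "mixed", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugyo9RySMNHdRlQ6cWJ4AaABAg", "responsibility": "none",
   "reasoning": "unclear", "policy": "none", "emotion": "resignation"}
]'''

# Index the coded records by comment ID for constant-time lookup.
records = {rec["id"]: rec for rec in json.loads(raw_response)}

# Looking up a comment ID yields the row shown in the Coding Result table.
coding = records["ytc_UgzcHr1pHNVENZU9I914AaABAg"]
print(coding["responsibility"])  # -> ai_itself
print(coding["emotion"])         # -> indifference
```

A dict keyed by `id` matches the dashboard's access pattern: each coded comment is fetched individually, so building the index once per response and reusing it is cheaper than scanning the array on every lookup.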