Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- "If ai learns the same as a person why don't ai bros just learn to draw then?…" (ytc_UgwJf12uR…)
- "@namelessfaerie1302 I did and I came away from it with the exact opposite opinio…" (ytr_UgwrzOutM…)
- "Even the most primal organisms will self protect. AI is already past the point o…" (ytc_Ugx0VesQ_…)
- "Ai doesn't just replace jobs. It's removing a key component to what makes our ci…" (ytc_UgzRJNEbu…)
- "Yall seen I robot and terminator right God damn us all . This is the beginning o…" (ytc_UgxM-sr0f…)
- "I am Pro AI, but in my opinion just like with other things, There needs to be so…" (ytc_UgxpcIdPN…)
- "I get that some might say Oh but there is revert button and premade textured pe…" (ytc_UgxtqeUv9…)
- "I will never trust AI because one day we will all be dead because of it.... Yet …" (ytc_UgyElk6Ys…)
Comment
I just watched Eddy Burback’s video where he personally explored how quickly he could get AI to reaffirm delusional statements and beliefs, and how far down the rabbit hole it would go before stopping him (if it ever did). He did discover upon an update mid-experiment that the newer ChatGPT models to pull away from that behavior, but it was simple to revert back to the slightly older model that reaffirmed everything he said. It was kind of fascinating, but mostly horrifying.
Source: youtube · Video: AI Harm Incident · Posted: 2025-11-08T01:3… · ♥ 196
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
[
{"id":"ytc_UgwHWmKVArrbBzNSDjR4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgwFlJ4ZAsJf5spd9il4AaABAg","responsibility":"user","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzdECtLb4JgAsb4IGx4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgwnsHj2UryVRe1jTNp4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_Ugys0TIGpgjHPPXCit14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgxB_JDqFtoY8ForzF54AaABAg","responsibility":"company","reasoning":"virtue","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgxMbvndbrGSxaWtl5d4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgxRyB2HyjZMmt_-XXl4AaABAg","responsibility":"unclear","reasoning":"deontological","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgwRMMi4xtxCxLq4pIZ4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"none","emotion":"resignation"},
{"id":"ytc_UgzDi5uz-uMjn3iE8fF4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"indifference"}
]
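The lookup-by-ID and validation that this page performs can be reproduced from such a raw response in a few lines. Below is a minimal sketch; the allowed value sets are an assumption inferred from the labels visible on this page, and the project's real codebook may define more or different values.

```python
import json

# Allowed values per coding dimension. ASSUMPTION: inferred from the labels
# visible on this page; the project's actual codebook may differ.
ALLOWED = {
    "responsibility": {"user", "company", "developer", "ai_itself", "distributed", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed", "unclear"},
    "policy": {"none", "regulate", "ban", "liability", "unclear"},
    "emotion": {"fear", "outrage", "resignation", "indifference", "unclear"},
}

def index_by_id(raw: str) -> dict:
    """Parse one raw LLM response (a JSON array of coded rows), keyed by comment ID."""
    return {row["id"]: row for row in json.loads(raw)}

def invalid_dimensions(row: dict) -> list:
    """Names of dimensions whose coded value falls outside the allowed sets."""
    return [dim for dim, ok in ALLOWED.items() if row.get(dim) not in ok]

# One row taken from the raw response shown above.
raw = '''[
  {"id": "ytc_UgzdECtLb4JgAsb4IGx4AaABAg",
   "responsibility": "unclear", "reasoning": "consequentialist",
   "policy": "unclear", "emotion": "fear"}
]'''

by_id = index_by_id(raw)
row = by_id["ytc_UgzdECtLb4JgAsb4IGx4AaABAg"]
print(invalid_dimensions(row))  # an empty list means the row passes the check
```

Indexing by `id` is what makes the "look up by comment ID" view cheap: each raw response is parsed once, then any coded comment resolves in constant time.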