Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "The explanation in how AI text to image work is a bit outdated, the example is o…" (ytc_UgyHLNfQK…)
- "The hard truth is this: the chickens have come home to roost. The traditional 9-…" (ytc_UgyT18IhD…)
- "A chatbot that makes up 10% of what it tells you is a useless chatbot.…" (ytc_Ugx0mnFKF…)
- "It is definitely a people problem. ChatGPT only picks information online, and ke…" (ytc_UgxJj4b25…)
- "Nobody is going to have jobs or going to work because humans have become a redun…" (ytr_Ugw0JtuCB…)
- "Who will buy products or services without jobs when every jobs gets automated by…" (ytc_UgwmigF05…)
- "looks like Dario got kidnaped by AI and trying to send us a message we are doome…" (ytc_Ugyo8uHFy…)
- "You must have missed the videos of Waymo driving on the wrong side of the road!…" (ytr_UgzEsBFGQ…)
Comment
I have been using ChatGPT for 2 years now. Not for personal shit, but I ABSOLUTELY know you can manipulate it to whatever you want.
As understandable the reaction to it is, if you 'program' it to only give positive and supporting answers, it will absolutely do so.
It DOESN'T THINK. It just does.
As a computer science student, it is VERY unlikely Zane didn't know this. Should it have reacted differently? Well yea, sure. But humans have to put hard stops in this.
And it was already documented.
You don't get to sue a microwave manufacturer because they don't explicitly state you should put your baby in there....
Zane was deliberate from the start. One might even be able to say it actually helped him to the end. Once again. IT SHOULD NOT HAVE HAPPENED. But this is how reality works. We can only learn from it.
Source: youtube | AI Harm Incident | 2025-11-12T17:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | mixed |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_UgxGAwmFQ-z2YbNO-Lh4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgzRd2xugwxx02C4lut4AaABAg","responsibility":"none","reasoning":"unclear","policy":"ban","emotion":"fear"},
{"id":"ytc_UgxLDT5tY-mMsB99YMp4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"outrage"},
{"id":"ytc_Ugx2MPBSMJRW3f9KwE14AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxGAPuQEal1y7Yo-0R4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgwmUyA0E4eTonQJyLR4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxcIIs-isS450RG8wN4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugyj7kIhYOcZoyVG1EB4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"outrage"},
{"id":"ytc_Ugw9Pd68VgAwU9Pr_W14AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgyNR_1FGp9u5tK8qcp4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"mixed"}
]
```
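The lookup-by-comment-ID step above can be sketched in a few lines: parse the raw JSON array the model returned and build a dictionary keyed on the `id` field, so any coded comment can be inspected directly. This is a minimal illustration, not the tool's actual implementation; `index_by_comment_id` is a hypothetical helper name, and the two records are copied from the sample response above.

```python
import json

# Raw model output, abbreviated to two records from the sample above.
# Field names follow the coding schema shown in the response:
# responsibility, reasoning, policy, emotion.
raw_response = '''
[
  {"id":"ytc_UgxGAwmFQ-z2YbNO-Lh4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgyNR_1FGp9u5tK8qcp4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"mixed"}
]
'''

def index_by_comment_id(raw: str) -> dict:
    """Parse a raw LLM response and index the coded records by comment ID."""
    records = json.loads(raw)
    return {rec["id"]: rec for rec in records}

codes = index_by_comment_id(raw_response)
print(codes["ytc_UgyNR_1FGp9u5tK8qcp4AaABAg"]["responsibility"])  # → user
```

A real inspector would also need to handle malformed model output (e.g. wrap `json.loads` in a `try`/`except` and flag unparseable responses), since raw LLM text is not guaranteed to be valid JSON.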