Raw LLM Responses
Inspect the exact model output for any coded comment, or look up a comment by its ID.
Random samples:

- "Waymo is just indian driving remotely, there is no robot driving it. Just use re…" (ytc_Ugy8s8wfF…)
- "Bruh the outcome is based around the idea that it makes the kids more smarter an…" (ytc_UgyPXTbaw…)
- "and google has a policy against creating sentient AI.... so whats your point? th…" (ytr_UgwXaqkTx…)
- "Elon musk cannot see the future. He does not know more than any of us. We don’t …" (ytc_UgyqSrHzA…)
- "Is it possible to code AI with a universal set of morals, like those that are fo…" (ytc_UgzNobJa9…)
- "AI is going create another layer of have and have nots. If there was hope of an …" (ytc_Ugx8h04TY…)
- "11:21 THERE'S A *FACE* IN THE CLOAK. RIGHT NEXT TO HIS HIP. IT'S DEMENTED. AI pe…" (ytc_Ugz1D-vmY…)
- "Not really. CS is mostly about useless knowledge unless you want to be academic.…" (ytr_UgxQavl-k…)
Comment
I strongly disagree with your conclusion that this was not an AI problem. AI does not understand your motivation when you ask. AI does not properly understand whether the question you ask could have two completely different contexts with two completely different answers. And worst of all, AI wants to tell you what you want to hear. So when you asked (before the hand-coded safeguards were put in place) "can I replace chloride with bromide", it wants to say yes. So it digs up examples from its training data where you can replace it, and omits conflicting information.
I see the same effects in my area of expertise. If you ask a question that has no satisfying answer, ChatGPT will still answer "Sure, you can do that, I'll show you how", and then it hallucinates a solution that looks right but does not work. This is especially bad by the fourth or fifth follow-up question.
Source: youtube · Topic: AI Harm Incident · Posted: 2025-11-25T06:4… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | unclear |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
[
{"id":"ytc_Ugzf7NmOLhm2tVkK6ed4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"unclear"},
{"id":"ytc_Ugw4ZlM6dc3aYarFbFh4AaABAg","responsibility":"company","reasoning":"contractualist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgyaXW-dZhx1sWFwbtV4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"disapproval"},
{"id":"ytc_Ugyp6IWnXQ8C4Cfvzxp4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"mixed"},
{"id":"ytc_UgzRUHUAWGjGciAoLmh4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgzAfQzn9xFZAwWQu9B4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgzH_C6cj0c7lz9RwSh4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgxEmUVBCSX7Zxm_GGV4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgxkN4Q32zWNT0Bx7wJ4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgyYrAq-42u-RZvREOJ4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"liability","emotion":"mixed"}
]
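The raw response above is a JSON array of per-comment codes, one object per comment with the four coding dimensions from the table. A minimal sketch of parsing such a response and indexing it by comment ID for lookup (the function name and the validation step are assumptions for illustration, not part of the tool shown):

```python
import json

# Raw LLM response: a JSON array of per-comment codes, in the schema shown above.
# Two records copied from the sample output for illustration.
raw_response = '''[
  {"id": "ytc_Ugzf7NmOLhm2tVkK6ed4AaABAg", "responsibility": "ai_itself",
   "reasoning": "unclear", "policy": "unclear", "emotion": "unclear"},
  {"id": "ytc_Ugw4ZlM6dc3aYarFbFh4AaABAg", "responsibility": "company",
   "reasoning": "contractualist", "policy": "regulate", "emotion": "outrage"}
]'''

DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def index_codes(response_text):
    """Parse the model output and index each record by comment ID,
    skipping any record missing one of the four coding dimensions."""
    codes = {}
    for record in json.loads(response_text):
        if all(dim in record for dim in DIMENSIONS):
            codes[record["id"]] = {dim: record[dim] for dim in DIMENSIONS}
    return codes

codes = index_codes(raw_response)
print(codes["ytc_Ugw4ZlM6dc3aYarFbFh4AaABAg"]["policy"])  # prints: regulate
```

Skipping malformed records rather than raising keeps one bad model output from discarding a whole batch; flagging them for re-coding would be the stricter alternative.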