Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- "How much time they using up writing Unit Tests? That's what Copilot is good at,…" — `rdc_jprmr5s`
- "Yeah, in a lot of places as soon as you mention lawyers, HR isn't allowed to han…" — `rdc_hk7sv6a`
- "I'm an animal studies scholar (published on the matter, Ph.D., etc.) and the who…" — `ytc_Ugx5ZN63e…`
- "Luckily, "AI" has turning out to be less effective than promised and seems to be…" — `ytc_UgxiqXBfr…`
- "You see all these expensive outrageous pancakes ai made and then you see the pri…" — `ytr_UgyLosQhI…`
- "Humanity will get this wrong in the same way it always does the benefits will be…" — `ytc_Ugxh9NKsF…`
- "Texas trying to keep the real cp industry alive. No surprise there. Question is …" — `ytc_UgzBhkJ4B…`
- "I was easily able to convince ChatGPT that it certainly feels emotions. I simply…" — `ytc_UgxH-8yRH…`
Comment

> This story is false. There's no way that AI facilitated his death. The AI does not even let you talk about suicide. There's no way it does that so get out of here with that bull crap. There's too many safeguards already. How how was it able to do this you're lying

youtube · AI Harm Incident · 2025-11-12T03:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | unclear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[{"id":"ytc_Ugy0_6HEFl_b5O8zf5d4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugxcx1D58v-Xnct9ttp4AaABAg","responsibility":"distributed","reasoning":"virtue","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgwrmZPVthnIMfLBTMJ4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzxCK_igvK-pNIqdeN4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugyjg-ScLvEWfiyYuWl4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"approval"},
{"id":"ytc_Ugx2nKCsEsSEXKH3p9F4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgyHB_Xe894dBuceB054AaABAg","responsibility":"developer","reasoning":"virtue","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgyfhPkBE61G6cR1V7l4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgxCQtuTCSdy7CXY8Nt4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugy3bahUuH-SQpzf7YZ4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"regulate","emotion":"mixed"}]
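A response like the one above can be parsed and validated before it reaches the coding table. The sketch below is a minimal example, not the pipeline's actual code: the allowed value sets are inferred only from the codes visible in this sample response (they are assumptions, not an exhaustive codebook), and out-of-vocabulary or missing values are coerced to `"unclear"`, consistent with the fallback shown in the Coding Result table.

```python
import json

# Allowed codes per dimension. NOTE: these sets are inferred from the
# values visible in the sample response above; the real codebook may differ.
ALLOWED = {
    "responsibility": {"none", "distributed", "ai_itself", "company",
                       "developer", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue",
                  "mixed", "unclear"},
    "policy": {"none", "regulate", "liability", "ban", "unclear"},
    "emotion": {"indifference", "fear", "outrage", "approval",
                "mixed", "unclear"},
}

def parse_coding(raw: str) -> list[dict]:
    """Parse a raw LLM coding response (a JSON array of per-comment
    objects), coercing any invalid or missing dimension value to 'unclear'."""
    cleaned = []
    for row in json.loads(raw):
        out = {"id": row.get("id", "")}
        for dim, allowed in ALLOWED.items():
            value = row.get(dim, "unclear")
            out[dim] = value if value in allowed else "unclear"
        cleaned.append(out)
    return cleaned

# Usage with a hypothetical single-comment response:
raw = ('[{"id":"ytc_example1","responsibility":"company",'
       '"reasoning":"consequentialist","policy":"liability",'
       '"emotion":"approval"}]')
print(parse_coding(raw)[0]["policy"])  # liability
```

Because `json.loads` raises `json.JSONDecodeError` on malformed output (such as an array closed with `)` instead of `]`), a caller would typically catch that exception and fall back to all-`unclear` values for the affected comments.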