Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- ytc_Ugy98b5ND… — "wasn't there already a case of fake kidnapping and asking ransom using the Ai-ge…"
- rdc_jpti5s3 — "Given the number of people I've talked to, poorly worded zero shot prompts witho…"
- ytc_Ugxp-aVvJ… — "your honor, this man is guilty. According to our law enforcement team, the AI i…"
- ytc_Ugwhd1Z-j… — "Its almost like this guy doesn't live in the same world as us. 'We wouldn't want…"
- ytc_UggXNbZRY… — "I really like those philosophical dilemmas. If you say 'No they will never deser…"
- ytc_UgyIiDLh1… — "No it is not real art. The one REAL art is the one created by humans with no AI…"
- rdc_d3r9j3s — "And our government is pushing trade deals like the TPP that want to have our Ame…"
- ytc_UgxGjM_QZ… — "Is it just me or is the only people effect and or care about deep fake are peopl…"
Comment
I wonder... If this same chat transcript existed, however it was with a human being, rather than an LLM, would there be criminal charges, and what, specifically, might they be?
Any lawyers/law students who might help clarify this?
I'm not saying that it was 100% the reason, because obviously there's always a lot more going on that we don't see in his day to day life, but...
Would this have been criminal, had it been a human being saying those same words?
youtube · AI Harm Incident · 2025-11-13T05:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | contractualist |
| Policy | liability |
| Emotion | mixed |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_UgzBHEFKnxKoP0p4I2N4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"indifference"},
{"id":"ytc_Ugy_LE1yAlwHd_VJHnJ4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"outrage"},
{"id":"ytc_UgxCKhehTkVZ_B8U5j94AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgycRFDp5INGroD2sDF4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugw_iX4V5XNBKLcwhyp4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgxZ7FfKcPxe2NLdzLJ4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"resignation"},
{"id":"ytc_UgxN-9ty6Ag2VATawU94AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"outrage"},
{"id":"ytc_UgwVtIwNQeFLMOAwKWV4AaABAg","responsibility":"distributed","reasoning":"contractualist","policy":"liability","emotion":"mixed"},
{"id":"ytc_UgzvKJJDvFNYx0XzucB4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugz43RdROkCJlwumYKN4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"}
]
```
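A raw response like the one above can be parsed and indexed by comment ID before its values are trusted. The sketch below is a minimal, hypothetical validator: the allowed values per dimension are inferred only from the sample records shown here, not from the project's full codebook, which may define more.

```python
import json

# Allowed values per coding dimension, inferred from the sample records
# above (assumption: the real codebook may permit additional values).
SCHEMA = {
    "responsibility": {"developer", "user", "ai_itself", "distributed"},
    "reasoning": {"deontological", "virtue", "consequentialist", "contractualist"},
    "policy": {"liability", "none", "regulate", "ban", "unclear"},
    "emotion": {"indifference", "outrage", "fear", "resignation", "mixed"},
}

def validate_records(raw: str) -> dict:
    """Parse a raw LLM response and index valid records by comment ID."""
    records = json.loads(raw)
    indexed = {}
    for rec in records:
        for dim, allowed in SCHEMA.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{rec.get('id')}: bad {dim} value {rec.get(dim)!r}")
        indexed[rec["id"]] = rec
    return indexed

# Usage: one record copied from the raw response above.
raw = ('[{"id":"ytc_UgwVtIwNQeFLMOAwKWV4AaABAg",'
       '"responsibility":"distributed","reasoning":"contractualist",'
       '"policy":"liability","emotion":"mixed"}]')
coded = validate_records(raw)
print(coded["ytc_UgwVtIwNQeFLMOAwKWV4AaABAg"]["policy"])  # liability
```

Indexing by ID mirrors the tool's "look up by comment ID" view: once validated, any coded comment can be retrieved in constant time for inspection.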