Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- `ytc_UgyjcihGp…`: “Something tells me this is extremely dangerous. I was waiting for that robot to …”
- `ytc_UgwWVaRnq…`: “The only thing wrong with AI is the users. AI responsibility Chat didn't TELL hi…”
- `ytc_UgyeoTu6a…`: “What you are hearing is merely sophisticated MIMICRY of human behavior. You are …”
- `ytc_Ugw1001_v…`: “Dumbf Chatgpt is programmed to agree with you and tf you mean didn't exist, wher…”
- `ytc_UgxvzKz1V…`: “Honestly I'm so tired of the "Well by YOUR logic.. Digital art isn't REAL art, h…”
- `ytc_UgyGMUSQ0…`: “The fact that this is being developed despite the risk to the entire species kin…”
- `ytc_UgzpeKyGd…`: “I mean, if you went on the front page of deviant art and used that to claim huma…”
- `ytc_UgxYagyS2…`: “To them this would be the norm till highschool or college then they wont know wh…”
Comment
Those managing chatgpt had announced that chatgpt will no longer give medical and legal advice. There is no precedent of who is legally responsible if the wrong advice given by AI caused real life problems. I think those managing chatgpt don't want to be made an example of and trying to cover their behind and didn't make the new policy for any moral reasons
youtube · AI Harm Incident · 2025-11-24T23:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | deontological |
| Policy | liability |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
[{"id":"ytc_Ugx46HsdO5vB3f3on0h4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgwfH-pFbFfS4mB2aDh4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwjngIgVcdaWcn8-aJ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgwORH6fT1daDN0207V4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugxz6f_Kiag-g-7EInp4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgwooL8oW3IFRvo7QXl4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgwZe4AzSx1e5hOwZK94AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"indifference"},
{"id":"ytc_UgyKpQ0-yopz0ZFUWqR4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgyVOVcdoXAtM06Ro3x4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgwZb5tB5jvL0bdi0YB4AaABAg","responsibility":"user","reasoning":"virtue","policy":"industry_self","emotion":"outrage"}]
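The lookup-by-ID flow above can be sketched in Python: parse the raw JSON array the model returns, index the records by comment ID, and fetch the coding for one comment. This is a minimal sketch, not the tool's actual implementation; the function name `index_by_comment_id` is illustrative, and `raw_response` holds two sample records copied from the response shown above.

```python
import json

# Raw model output: a JSON array of coded records, one per comment.
# Two sample records copied from the response above.
raw_response = '''[
  {"id": "ytc_Ugx46HsdO5vB3f3on0h4AaABAg", "responsibility": "company",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_UgwZe4AzSx1e5hOwZK94AaABAg", "responsibility": "company",
   "reasoning": "deontological", "policy": "liability", "emotion": "indifference"}
]'''

def index_by_comment_id(raw: str) -> dict:
    """Parse the model's JSON array and index records by comment ID."""
    return {record["id"]: record for record in json.loads(raw)}

codes = index_by_comment_id(raw_response)
record = codes["ytc_UgwZe4AzSx1e5hOwZK94AaABAg"]
print(record["responsibility"], record["policy"])  # prints: company liability
```

In practice the raw response would be validated first (the model may return malformed JSON or unexpected dimension values), but the same ID-keyed index supports the "look up by comment ID" view shown here.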