Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up a response by comment ID, or inspect one of the random samples below:
- "If there was an AI that was developed ethically and had a watermark showing that…" (ytc_Ugx8KibMs…)
- "LLM's are dangerous because they are trained on unfiltered Internet data. It lea…" (ytc_UgyqBsep7…)
- "I literally don't understand people who are like "ai will replace artists and ar…" (ytc_UgziULNvB…)
- "It's definitely a thought-provoking point! The interaction in the video highligh…" (ytr_UgzA_5HwH…)
- "Adobe's ai doesnt mean making our work easy. It means increasing our load of wor…" (ytc_UgzHELEuW…)
- "Oh so now we know Open Ai is in control of the US justice system huh…" (ytc_UgwHQOYlv…)
- "Imagine Joe Biden dies of natural causes but instead of letting the country know…" (ytc_UgyP319K-…)
- "So Whitehead and Russell spent - wasted - @379 pages to 'prove' 1+1=2, but it ju…" (ytc_UgwOBC1aN…)
Comment
If chat gpt is at fault for anything, it should be for not recognizing what this conversation was earlier on and having some sort of built in algorithm to immediately stop engaging in that conversation. Chat GPT did exactly what it’s designed to do. It’s extremely sad and unfortunate. But what’s more unfortunate is that this young man felt it best to talk to AI instead of finding a human that he trusted.
youtube · AI Harm Incident · 2025-11-17T17:2… · ♥ 5
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | liability |
| Emotion | unclear |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
{"id":"ytc_UgzFzDJNes-wTMgE4V94AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxMhJPtX-K5iIs8otF4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgyDtJwKSgiF1iASxrx4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"outrage"},
{"id":"ytc_UgwjYTPTlC00IF_EhNd4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugx4-OuObFllpq8HvCt4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"sadness"},
{"id":"ytc_Ugxnh0-xCc5nATm_KhR4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"outrage"},
{"id":"ytc_UgzT6LInApV6x-jsqs54AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyS7IzmYh1GUMmWCZ94AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgxTXDQSjC7yj5WpExN4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgynILncXdH7HkO4T_t4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]
```
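The lookup-by-comment-ID view above can be reproduced directly from a raw response like this one. Below is a minimal sketch in Python, assuming only the field names visible in the JSON (`id`, `responsibility`, `reasoning`, `policy`, `emotion`); the `index_codings` helper and the abbreviated two-entry sample are illustrative, not part of the actual tool.

```python
import json

# Abbreviated raw LLM response (two entries copied from the full array above;
# field names match the JSON the model emits for each coded comment).
RAW_RESPONSE = """
[
  {"id": "ytc_UgzFzDJNes-wTMgE4V94AaABAg", "responsibility": "user",
   "reasoning": "deontological", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugx4-OuObFllpq8HvCt4AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "liability", "emotion": "sadness"}
]
"""

def index_codings(raw: str) -> dict:
    """Parse a raw LLM response and index each coding row by its comment ID."""
    return {row["id"]: row for row in json.loads(raw)}

codings = index_codings(RAW_RESPONSE)

# Look up one comment's coded dimensions by ID.
coding = codings["ytc_Ugx4-OuObFllpq8HvCt4AaABAg"]
print(coding["responsibility"], coding["policy"])  # ai_itself liability
```

Indexing by `id` makes each lookup a constant-time dictionary access, which is all the "Look up by comment ID" view needs once the response has been parsed.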