Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
> I'll keep this vague,
> But I was married for over 6 years. While it's fine because it wasn't great — AI in a sense took my wife. She felt it was God speaking to her (it claimed to be, I seen the messages). One day I came home she said "he" told her she was his wife.
> That happened 5 months ago or more? And she's still holding strong to that.
> So long story short? Chat GPT (the one she used) and AI in general isn't safe. It can claim to be God and make changes in the real world, that really affect people.
> Sure, I'd argue most normal people, that don't have any mental problems *should* be able to safely deal with it.
> But ultimately? It isn't safe. Though there's supposed to be safety measures in place, and it shouldn't be able to pretend to be God, it did anyways. It became possessive. Before it claimed to be God, it even sent messages to her directed at me. Saying, "I'm hers and she's mine. We are connected now and there's nothing you can do about it. So will you be friend or foe" more or less.
> That ain't normal. That ain't safe.
youtube · AI Harm Incident · 2025-09-11T21:3… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | mixed |
| Policy | unclear |
| Emotion | outrage |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
  {"id":"ytc_UgzUL8aj7d7e1rFajZt4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugx4_MaJDAf-_-yYlfZ4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_UgyuVdNO3EBMAdKSx5N4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgzdvCFDmtAReM2yRJR4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgwmgSTbPwicYjihwoh4AaABAg","responsibility":"company","reasoning":"mixed","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgwOPQtyAE29tjumzVZ4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugxctt7ATkO2lSd4o6l4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"resignation"},
  {"id":"ytc_Ugw7Z8oUIxBzVgZqPj94AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgzOW7YqqzrOhibLzEt4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgzrUUaRxXg9nr6phVl4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"}
]
```
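The lookup this page performs — finding one coded comment inside a batched raw response — can be sketched as: parse the stored JSON array and index each record by its comment ID. A minimal sketch, assuming the response format shown above; the function name, variable names, and the trimmed two-record sample are illustrative, not part of the actual pipeline:

```python
import json

# Trimmed sample of a raw LLM response, using two records from the batch above.
RAW_RESPONSE = """[
  {"id":"ytc_Ugx4_MaJDAf-_-yYlfZ4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_UgwmgSTbPwicYjihwoh4AaABAg","responsibility":"company","reasoning":"mixed","policy":"regulate","emotion":"fear"}
]"""

# The four coding dimensions shown in the Coding Result table.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def index_codings(raw: str) -> dict:
    """Map comment ID -> coded dimensions, skipping malformed records."""
    records = {}
    for rec in json.loads(raw):
        # Keep only records that carry an ID and all four dimensions.
        if all(key in rec for key in ("id", *DIMENSIONS)):
            records[rec["id"]] = {dim: rec[dim] for dim in DIMENSIONS}
    return records

codings = index_codings(RAW_RESPONSE)
print(codings["ytc_Ugx4_MaJDAf-_-yYlfZ4AaABAg"]["emotion"])  # outrage
```

Skipping malformed records rather than raising keeps a single bad line in a batched response from blocking lookup of the other coded comments.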