Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
Bro i have talkie character ai and almost every single ai things and
My sis…
ytc_UgzeUB43m…
I would use the AI at work. If they're paying you and paying for it and they wan…
rdc_oi42y9y
Deepseek: I am ai. The person who created me and I should not sacrifice our own …
ytc_UgzVpBGTW…
I like talking to ChatGPT as if it were a human. So naturally, Ima say please an…
ytc_UgzmsM5Y6…
I hate AI. I think it’s stupid. Humans are organic beings. We are not artificial…
ytc_UgxMNvD7P…
It will be ironic that instead of us killing ourselves with a nuclear war, it wi…
ytc_Ugxq6DVD3…
and yet, none of them looked better than the ai
that's the world we're in…
ytc_UgxOKfxfE…
I am never concerned with AI taking jobs, that is childs play. Instead worry abo…
ytc_UgzTwxeB6…
Comment
Of course the AI are amoral. why wouldnt they be? They are programmed with all our internet activities and are not human. They have no life, no memories, etc. There is no reason for them to be ethical or moral.
That said, we dont have actual AI yet. What we have is a very good mirror image of our best and worst behaviors. LLMs are not AI. They are just fancy programs that copy our language use and some behaviors. They dont know what anything they are saying *means*. They assign values to words and phrases and look for patterns. Some of those patterns we dont see and that is why they say surprising things at times but overall they are just copying our words.
youtube
AI Harm Incident
2025-07-26T14:4…
♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | unclear |
| Emotion | indifference |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
{"id":"ytc_UgwG3o7w0IhyIfKIdOh4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgxAWvV3zRJ8_UDVaxF4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugztk7T8tR8N5f9-rUh4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"resignation"},
{"id":"ytc_Ugzr6PpZapF2hvcvMfB4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgytPU02isss1sT29vl4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"fear"},
{"id":"ytc_UgyY04vamozNHT1YjnJ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgytwiPzAx-7-1hhxy14AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgwGbHLjy1eNufCMG5h4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgzQGlYIGpa4TdjqSRt4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgyFPjzKzjd1p2Vo4U54AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"}
]
```
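Each row in the raw response above codes one comment on four fixed dimensions: responsibility, reasoning, policy, and emotion. A minimal sketch of how such a response could be parsed and sanity-checked before the values reach the coding table — the allowed category sets below are inferred from the sample output shown here, not from the actual codebook, so treat them as assumptions:

```python
import json

# Allowed values per dimension, inferred from the sample rows above
# (assumption: the real codebook may define additional categories).
ALLOWED = {
    "responsibility": {"developer", "company", "government", "user",
                       "ai_itself", "none", "unclear"},
    "reasoning": {"deontological", "consequentialist", "unclear"},
    "policy": {"regulate", "liability", "none", "unclear"},
    "emotion": {"fear", "outrage", "indifference", "resignation",
                "approval", "mixed", "unclear"},
}

def parse_coding(raw: str) -> list[dict]:
    """Parse a raw LLM coding response, keeping only well-formed rows.

    A row is kept when it has an "id" and every dimension holds a
    value from the allowed set; malformed rows are silently dropped.
    """
    rows = json.loads(raw)
    valid = []
    for row in rows:
        if "id" not in row:
            continue
        if all(row.get(dim) in vals for dim, vals in ALLOWED.items()):
            valid.append(row)
    return valid
```

Dropping (rather than repairing) malformed rows keeps the coded table trustworthy: an out-of-vocabulary value usually signals that the model ignored the schema for that comment, and the comment can then be re-queued for coding.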