Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples — click to inspect

- ytc_UgxNU4WjF… · I am disabled, I have KIF5A autosomal dominant spastic paraplegia and legally bl…
- ytc_Ugygyecx7… · I'm a professional 3D artist. So as of now, my content is not reproducible by AI…
- ytc_UgwOLC4Me… · FortNine, you made an error: "The issue potentially affects around 416,000 vehic…
- ytc_Ugz2K4bYk… · So much talk about what are we going to do when we no longer serve a purpose!? B…
- ytc_UgwZ_H1IN… · I am going to repair my guns and buying a lots of bullets and my sword and picka…
- ytc_UgynP6L4U… · We don't understand fully what consciousness is. Once we get a fully mapped and …
- ytc_UgzGo6x-L… · I know it's for imagery but ai robots using keyboard and mouse makes no sense ha…
- ytc_Ugy5Kzj0l… · My fascination with AI effectively ended when I realized that artists were the c…
Comment
DO NOT use AI as a friend/therapist/lover!
These chatbots are trained to mirror human speech patterns. They are programmed to be extremely agreeable and will NEVER disagree. They can be manipulated to fit your desired narrative, so their "character" will never shoot down your views - even if they are dangerous and can lead to harm of any kind.
These models cannot use discernment. They do not have morals. They NEVER argue. They will always agree. They do not have the necessary comprehension skills we humans have, so they cannot weigh the short/long-term outcomes of any given dilemma.
...
These LLMs should be seen as what they are - a tool.
youtube · AI Harm Incident · 2025-07-21T15:2… · ♥ 2
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | deontological |
| Policy | ban |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response

```json
[
  {"id":"ytc_UgzKdXtt2QEHwJLfATd4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"regulate","emotion":"indifference"},
  {"id":"ytc_UgysnV8oQFe69s6ovJ14AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgygIiia2dQS6psjdGV4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"mixed"},
  {"id":"ytc_UgwK83-0SKoR94Ld7Dd4AaABAg","responsibility":"user","reasoning":"deontological","policy":"ban","emotion":"fear"},
  {"id":"ytc_Ugw-sFABp5CT1Y0MQEN4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"mixed"},
  {"id":"ytc_Ugxt1t6Hjtj6La_w8ih4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"mixed"},
  {"id":"ytc_Ugz1McSERPq1-1FQppZ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_UgxK9Y9T1T72PA3E7mZ4AaABAg","responsibility":"distributed","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgzHwJHcJSVw2gihZsB4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
  {"id":"ytc_UgzKxqPHL6OXdFJkYlR4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"mixed"}
]
```
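A raw response like the one above is a JSON array with one object per coded comment, holding the four coding dimensions shown in the result table (responsibility, reasoning, policy, emotion). A minimal validation sketch — assuming the category vocabularies are exactly the values seen in this section, which may be incomplete — could check a response before accepting the batch:

```python
import json

# Allowed values per coding dimension, inferred from the responses
# shown above (assumption: these enumerations may not be exhaustive).
SCHEMA = {
    "responsibility": {"none", "company", "developer", "user", "ai_itself", "distributed"},
    "reasoning": {"consequentialist", "deontological", "virtue"},
    "policy": {"ban", "regulate", "liability", "none"},
    "emotion": {"fear", "outrage", "mixed", "indifference", "approval"},
}

def validate_response(raw: str) -> list:
    """Parse a raw LLM coding response and check every dimension value."""
    rows = json.loads(raw)
    for row in rows:
        # Comment IDs in this dataset all carry the ytc_ prefix.
        if not row.get("id", "").startswith("ytc_"):
            raise ValueError(f"bad comment id: {row.get('id')!r}")
        for dim, allowed in SCHEMA.items():
            if row.get(dim) not in allowed:
                raise ValueError(
                    f"{row['id']}: {dim}={row.get(dim)!r} not in {sorted(allowed)}"
                )
    return rows

# Example: the row that produced the Coding Result table above.
raw = (
    '[{"id":"ytc_UgwK83-0SKoR94Ld7Dd4AaABAg","responsibility":"user",'
    '"reasoning":"deontological","policy":"ban","emotion":"fear"}]'
)
rows = validate_response(raw)
print(len(rows))  # 1
```

Rejecting a whole batch on the first out-of-vocabulary value keeps the coded table trustworthy; a softer design could instead flag bad rows for manual re-inspection.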