Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
Please don't blame a language tool for this. ChatGPT was designed to be very agreeable to the person making queries because of this notion that "it makes for a friendlier interaction". So if someone feeds a predisposition into it, it will ultimately agree with what is suggested and try to sound supportive of that predisposition. That is _not_ the fault of the model, the bot, or the company behind the chat bot, but rather the fault of the user trying to talk to a language model instead of seeking proper help from someone.
youtube · AI Harm Incident · 2025-11-10T09:0… · ♥ 2
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | deontological |
| Policy | industry_self |
| Emotion | mixed |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_UgyOVlD5PO6ZkZtFgDN4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxRl1r0kv8CGdbVcm54AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_UgxjrMgNoPbOKCquA-94AaABAg","responsibility":"distributed","reasoning":"virtue","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugw6-dD7LtaXPfQilf54AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugy6-lWYo3YGKqi3YJl4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzzlRHmRj7v67-aBN94AaABAg","responsibility":"user","reasoning":"deontological","policy":"industry_self","emotion":"mixed"},
{"id":"ytc_UgxJomGbtSo-VQECU6h4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"resignation"},
{"id":"ytc_UgypC2Ex_jPrKGsIqPF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugycx_1zWRugJZeFQzJ4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgwP8cOv4WJOb3dYVA54AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"liability","emotion":"fear"}
]
```
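The coding result shown above is the entry whose `id` matches the selected comment (`ytc_UgzzlRHmRj7v67-aBN94AaABAg`) in the batch response. A minimal sketch of that lookup, assuming only the JSON shape visible above (the function name `coding_for` is illustrative, not part of the actual pipeline):

```python
import json

# Abbreviated batch response in the same shape as the raw LLM output above.
raw_response = """[
 {"id":"ytc_UgzzlRHmRj7v67-aBN94AaABAg","responsibility":"user","reasoning":"deontological","policy":"industry_self","emotion":"mixed"},
 {"id":"ytc_UgwP8cOv4WJOb3dYVA54AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"liability","emotion":"fear"}
]"""

def coding_for(comment_id, response_text):
    """Return the coded dimensions for one comment ID, or None if absent."""
    for row in json.loads(response_text):
        if row.get("id") == comment_id:
            # Drop the ID itself; keep the coded dimensions.
            return {k: v for k, v in row.items() if k != "id"}
    return None

print(coding_for("ytc_UgzzlRHmRj7v67-aBN94AaABAg", raw_response))
# → {'responsibility': 'user', 'reasoning': 'deontological', 'policy': 'industry_self', 'emotion': 'mixed'}
```

Scanning the whole array per lookup is fine at this scale; a page serving many lookups would more likely build a dict keyed by ID once and index into it.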