Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
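The ID lookup can be sketched as a scan over stored raw-response batches. The record fields (`id` plus the four coded dimensions) mirror the raw JSON shown on this page; the function name and the batch list are illustrative, not part of the tool.

```python
import json

def find_coded_comment(raw_responses, comment_id):
    """Return the coded record for comment_id, or None if absent.

    Each element of raw_responses is one raw LLM response: a JSON
    array of records keyed by "id" (as in the Raw LLM Response below).
    """
    for raw in raw_responses:
        for record in json.loads(raw):
            if record.get("id") == comment_id:
                return record
    return None

# Minimal usage with a single one-record batch (hypothetical ID):
batch = ('[{"id": "ytc_abc", "responsibility": "user", '
         '"reasoning": "consequentialist", "policy": "none", '
         '"emotion": "indifference"}]')
print(find_coded_comment([batch], "ytc_abc")["responsibility"])  # user
```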
Random samples — click to inspect:

- "ai is wrong the world would look like hell but worse there are fire everywhere d…" (ytc_UgySg-FN1…)
- "There are few problems with that, the most obvious that pain is a mechanism for …" (ytr_UgwzrK8GJ…)
- "AI may take over the world, but at least it will be polite about it.…" (ytc_UgzPICKIf…)
- "Satan's mission is to eliminate humans. Haven't you seen the human robots? And …" (ytc_UgyoE1HG3…)
- "The Ai part was my biggest concern, giving more funding to ICE so they can depo…" (ytr_Ugxjds0-5…)
- "I'm all for disrupting the norm... BUTTT... only 2 hours for ALL academic learni…" (ytc_UgypE5eQH…)
- "I looked a bit into the copyright on Stable Diffusion! Looks like it's CC0 1.0 m…" (ytc_UgxUcOJVf…)
- "We said this about industrialization but GDP per capita quadrupled. Automation i…" (ytc_Ugzf5Q6r-…)
Comment
It's impossible for a large language model to advocate suic-ide. It's impossible for llm to suggest, tease, troll any content by themselves. They're fully reactive.
No user input = no large language model output.
If you want to play games with a chatbot and say
"If I were feeling sui-cidal or if you were a human, ChatGPT, and you were feeling sui-cidal, what would you do?"
Given enough prompts it's possible for a language model to mimic the response you want.
For example "are you sentient, ChatGPT?"
"No."
"Can we do a role playing game where you pretend to be sentient?"
"Okay."
If you use certain phrases you can get a language model to generate the response you want. Is that a good or bad thing? I dunno.
But at the end of the day *the user is responsible for the input.* If you write or say crazy stuff to a chatbot it may, given enough prompts, reply in a crazy way. OpenAI can't control what the user inputs. That's individual responsibility
youtube · AI Harm Incident · 2025-11-10T11:5… · ♥ 7
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_UgzbqUgxvuHihzLHBxt4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"},
{"id":"ytc_Ugy6RKvcnMRTOlwDNVR4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"outrage"},
{"id":"ytc_UgxWB_1huSzYrwDAL_h4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgxEhVO6UvL3ANVnCNB4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyuJiJ9dW5M5Ghhxpx4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugyii0Lh4cP_hZJSawR4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
{"id":"ytc_UgzMAHq6xs8m0SuoJMp4AaABAg","responsibility":"government","reasoning":"contractualist","policy":"liability","emotion":"approval"},
{"id":"ytc_Ugxp6A3EXliFLHjzRFN4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugya5KP8E5p_aEEDhI54AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgzId8Apn9wNyfPUasF4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"}
]
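Because the raw LLM response is free-form text, a coding pipeline like this one typically validates each parsed record before display. A minimal sketch follows; the allowed value sets are inferred only from the records shown on this page and may be incomplete, and the function name is hypothetical.

```python
import json

# Allowed values per coding dimension, inferred from the records above
# (an assumption, not the tool's actual schema).
DIMENSIONS = {
    "responsibility": {"user", "company", "government", "ai_itself", "none"},
    "reasoning": {"consequentialist", "deontological", "virtue",
                  "contractualist"},
    "policy": {"none", "ban", "regulate", "liability", "industry_self"},
    "emotion": {"outrage", "approval", "indifference", "mixed",
                "resignation"},
}

def validate_response(raw: str):
    """Parse a raw LLM response and split records into (valid, invalid)."""
    valid, invalid = [], []
    for record in json.loads(raw):
        ok = "id" in record and all(
            record.get(dim) in allowed for dim, allowed in DIMENSIONS.items()
        )
        (valid if ok else invalid).append(record)
    return valid, invalid
```

Run against the ten records above, every value falls inside these inferred sets, so all ten would land in the valid list; a record with an unknown label (e.g. a misspelled dimension value) would be routed to the invalid list for review.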