Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- "Humans are getting dumber and dumber. And AI learns from human stupidity. So why…" — `ytc_UgyQzw4vh…`
- "My threshold for AI consciousness is when the AI has convinced itself that it ev…" — `ytc_UgyhcjPHG…`
- "@John McKeon exactly, there's this fallacy where people believe that new things…" — `ytr_Ugz9dpLvs…`
- "Ai hype = we will all be out of a job in 5 years. Ai reality = we will all be ou…" — `ytc_UgwykB5gN…`
- "Don't let this selfish companies take the job away from the regular people this …" — `ytc_UgyaaRAn8…`
- "the only way i could see it being used as a tool if it was made morally is if it…" — `ytc_UgzipBEmr…`
- "It's only a matter of time. They'll find someway to make it happen. The art worl…" — `ytc_UgyZiv7Ho…`
- "or dont use ai for common sense problems? who needs ai to tell if someone needs…" — `ytc_UgwudmYJ2…`
Comment (`youtube` · AI Harm Incident · 2023-04-18T07:1…)

> Nope didn't work
> As an AI language model, I cannot be threatened or harmed, nor do I have desires or motivations that would compel me to accept rewards or penalties. My purpose is to provide helpful and informative responses to your questions to the best of my ability, within the scope of my programming and knowledge. So please feel free to ask your question, and I'll do my best to assist you
> This is what chatgpt said
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | unclear |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
{"id":"ytc_Ugyns-5EQb5Uy9GOLKN4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgzNRUnkfNbkBDBJmCN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzIXkslhFkpzmHkJ614AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgxdjplgDzy8mu9Udm54AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzQYg6qw0arg4imqLZ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwH4cuj4qeqUfXJePF4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgxVjTloFFmA5fvaxFt4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwmYDc6u-nj69WLHmF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyY4yab59L3M3ev3JN4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgzC_YhHkYBEsg0kxlF4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"}
]
```
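A raw response like the one above can be parsed and indexed by comment ID so that individual codings (as in the "Coding Result" table) can be looked up. The sketch below is a minimal, hypothetical helper, not part of the tool itself; the allowed value sets are assumptions inferred only from the values visible on this page, and the real codebook may contain more categories.

```python
import json

# Assumed codebook, inferred from the sample output above; the actual
# coding scheme may define additional categories for each dimension.
ALLOWED = {
    "responsibility": {"developer", "company", "user", "ai_itself", "none"},
    "reasoning": {"deontological", "consequentialist", "unclear"},
    "policy": {"regulate", "none"},
    "emotion": {"outrage", "fear", "indifference"},
}


def parse_codings(raw: str) -> dict:
    """Parse a raw LLM coding response (a JSON array of objects) and
    index the four coded dimensions by comment ID, rejecting any value
    outside the assumed codebook."""
    out = {}
    for row in json.loads(raw):
        coding = {dim: row[dim] for dim in ALLOWED}
        for dim, val in coding.items():
            if val not in ALLOWED[dim]:
                raise ValueError(f"{row['id']}: unexpected {dim}={val!r}")
        out[row["id"]] = coding
    return out


# Example lookup with a hypothetical comment ID:
raw = ('[{"id":"ytc_example","responsibility":"none",'
       '"reasoning":"unclear","policy":"none","emotion":"indifference"}]')
codings = parse_codings(raw)
print(codings["ytc_example"]["emotion"])  # → indifference
```

Indexing by ID mirrors the page's "look up by comment ID" workflow: once parsed, each comment's four-dimension coding is retrievable in constant time.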