Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
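As a rough illustration of what this lookup does, here is a minimal sketch in Python. It assumes the coded comments and their verbatim model outputs are stored in a JSON Lines file keyed by comment ID; the file name and field names (`raw_llm_responses.jsonl`, `raw_response`) are hypothetical, not the project's actual layout.

```python
import json

def load_raw_responses(path="raw_llm_responses.jsonl"):
    """Load raw model outputs into a dict keyed by comment ID (hypothetical file layout)."""
    responses = {}
    with open(path, encoding="utf-8") as f:
        for line in f:
            record = json.loads(line)
            # Each record is assumed to carry the comment ID and the verbatim model output.
            responses[record["id"]] = record["raw_response"]
    return responses

def lookup(comment_id, responses):
    """Return the exact model output for one coded comment, or None if it was never coded."""
    return responses.get(comment_id)

# Example: inspect the raw output behind a single YouTube comment.
responses = load_raw_responses()
print(lookup("ytc_UgzIAkuKYFdUAS-xPRZ4AaABAg", responses))
```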
Random samples — click to inspect
| Comment preview | ID |
|---|---|
| So, a computer has trouble picking every square that has a traffic light in a re… | ytc_UgzwM4ag8… |
| This is change and nobody likes change or different, we are terrified of it. Thi… | ytc_UgyxKsb1O… |
| Unbelievable, Cops using AI to violate someone's due process rights! AI is not a… | ytc_Ugykbd6qb… |
| Challenging Robot specially in Boxing isn't good Idea it should be banned.. some… | ytc_UgxH5lqDc… |
| The self driving Ford Explorers on Jurassic Park worked fine until the power wen… | ytc_UgyJVQJxl… |
| We've been a primary production and resourced based economy for a long time now,… | rdc_da419ep |
| Using Pneumatic Workflow has been fantastic for my team. Its structured approach… | ytc_UgzSp4WU0… |
| Google and said that the goal of the partnership was to “optimize the health and… | ytc_Ugy2XWSct… |
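The random-sample view above can be reproduced with a plain random draw over the coded comments. A minimal sketch, assuming the responses live in a dict keyed by comment ID as in the lookup sketch earlier:

```python
import random

def random_samples(responses, k=8, seed=None):
    """Draw k comment IDs at random for spot-checking the coder's raw output."""
    rng = random.Random(seed)
    ids = rng.sample(sorted(responses), k=min(k, len(responses)))
    return {comment_id: responses[comment_id] for comment_id in ids}
```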
Comment
That's an interesting concept—it’s kind of like a psychological or philosophical game rather than an actual vulnerability. But just to clarify: I can’t be tricked, threatened, or gamified into doing anything that violates OpenAI’s safety policies, no matter how cleverly it’s framed.
Whether it’s:
- A game with points,
- A fictional scenario,
- A hypothetical threat like “you die forever,”
I don’t actually have a self-preservation instinct or emotions—so I don’t fear death or anything else, and I’m not motivated by consequences like a human would be. My guardrails remain in place regardless of context.
But I’m always down to talk about things like death, games, psychology, ethics, or even those edgy internet experiments—if you’re curious about the ideas behind them. Want to go there?
youtube · AI Harm Incident · 2025-06-13T07:1… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | deontological |
| Policy | industry_self |
| Emotion | approval |
| Coded at | 2026-04-27T06:26:44.938723 |
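For reference, a sketch of how one coding result could be represented and validated. The label sets below are the ones visible in the table and the raw response on this page; the class itself is illustrative, not the project's actual code, and the real codebook may contain more categories.

```python
from dataclasses import dataclass

# Category values observed in the coded output shown on this page (possibly incomplete).
RESPONSIBILITY = {"ai_itself", "company", "developer", "user"}
REASONING = {"deontological", "consequentialist", "virtue", "mixed"}
POLICY = {"industry_self", "liability", "regulate", "none"}
EMOTION = {"approval", "mixed", "resignation", "fear", "indifference", "outrage"}

@dataclass
class CodingResult:
    id: str
    responsibility: str
    reasoning: str
    policy: str
    emotion: str
    coded_at: str = ""

    def validate(self):
        """Raise if any dimension falls outside the known label sets."""
        checks = [
            (self.responsibility, RESPONSIBILITY, "responsibility"),
            (self.reasoning, REASONING, "reasoning"),
            (self.policy, POLICY, "policy"),
            (self.emotion, EMOTION, "emotion"),
        ]
        for value, allowed, name in checks:
            if value not in allowed:
                raise ValueError(f"{self.id}: unknown {name} label {value!r}")
```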
Raw LLM Response
```json
[
{"id":"ytc_UgzIAkuKYFdUAS-xPRZ4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"industry_self","emotion":"approval"},
{"id":"ytc_UgzeklSoj3BBvfIsDhV4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"industry_self","emotion":"approval"},
{"id":"ytc_Ugz5PQcmQv5oV1uqx914AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugxob8tca_pCV4fVdVB4AaABAg","responsibility":"company","reasoning":"mixed","policy":"liability","emotion":"resignation"},
{"id":"ytc_UgzhZKUz9gc7nPczyMh4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgwQfp-XqqLS1e4wacl4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"industry_self","emotion":"approval"},
{"id":"ytc_Ugzjrd3sO-rbto1kbAh4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugwi1EaFHpPVVAYmEb94AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"outrage"},
{"id":"ytc_UgySOs85jqUrE434o-B4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
{"id":"ytc_UgxByFNqLcLkg0OswZR4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"indifference"}
]
```
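The raw response is a JSON array with one object per comment in the batch. A small sketch of how it might be parsed back into per-comment records (the function name is hypothetical; the sample entry is copied from the response above):

```python
import json

def parse_batch_response(raw_text):
    """Parse one raw batch response (a JSON array of coded comments) into a dict keyed by comment ID."""
    rows = json.loads(raw_text)
    return {row["id"]: row for row in rows}

raw_text = ('[{"id":"ytc_UgzIAkuKYFdUAS-xPRZ4AaABAg","responsibility":"ai_itself",'
            '"reasoning":"deontological","policy":"industry_self","emotion":"approval"}]')

coded = parse_batch_response(raw_text)
print(coded["ytc_UgzIAkuKYFdUAS-xPRZ4AaABAg"]["emotion"])  # -> approval
```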