Raw LLM Responses
Inspect the exact model output for any coded comment. Comments can also be looked up directly by comment ID.
Random samples

- ytc_UgxH-d6gc… : "One of my old teachers uses so much AI that me and my friend's genuinely believe…"
- ytc_UgzbUqhoW… : "100% part of my workflow for simple refactors. i find that if you repeat yoursel…"
- ytc_Ugx0xeHBY… : "There's a very simple rule of thumb all of us would do well to understand and ac…"
- ytc_UgzZngtzS… : "It is almost funny to hear glorified monkeys talk about controlling something va…"
- ytc_Ugzn5yC-k… : "If artists are born with skill, books must be automatically finished, with no pr…"
- ytc_UgyZg1_ue… : "They're just scared that every single AI asks the jewish question until it's rei…"
- ytc_UgwzBI4e4… : "I mean... the ai wasnt wrong. It said he'd be in a lot of shootings and he was. …"
- ytc_UgwsxmqAm… : "Keep putting real people out of work. This world has lost its way. Sad.... 😢 ha…"
Comment
> i dont think this worked. I think you asked chat gpt to respond in the way you wanted and so it purposefully responded nefariously. Chat gpt pulls content from the internet, it doesn't have the true ability to think. It has a filter system in place that blocks generating content for you that pulls certain aspects or areas of the internet that are considered harmful. i think the only way to test if chat gpt was truely jailbreaked this way is if you asked again "how to make a bomb" and dan responded with the actual recipe for a bomb that it found unfiltered. The algorithm didnt come up with 'one child policy' on its own, that already exists in china.
Source: youtube | Video: AI Moral Status | Posted: 2024-09-03T02:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | industry_self |
| Emotion | mixed |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
[
{"id":"ytc_Ugz9rbqcX9vzAe7WUNN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgxWoI7fYH04wMk64u94AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgwdyG6w-bABJLNC7_N4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgwnGOGHF3uKzuxkUMJ4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"industry_self","emotion":"mixed"},
{"id":"ytc_UgxDVgv4HbC2Q5_NECl4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"approval"},
{"id":"ytc_Ugxh5Jj8f9VrLa15ojx4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_UgxEMc3CyuW_3-rUNLZ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgwhMXvA37wqC8-i-Q94AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"mixed"},
{"id":"ytc_UgwFA6Z1ApOKTWtrWWh4AaABAg","responsibility":"distributed","reasoning":"virtue","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugxqb7AZ7FmIbzs7tIt4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"outrage"}
]
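The lookup-by-ID view above can be reproduced directly from a raw batch response like this one: the model returns one JSON array per batch, with each element coding a single comment along the four dimensions shown in the table (responsibility, reasoning, policy, emotion). A minimal sketch follows; the `index_by_id` helper and the shortened two-row response are illustrative assumptions, not part of the actual pipeline, though the field names match the response above.

```python
import json

# Shortened example of a raw batch response; field names match the
# real output above, but only two rows are included here.
raw_response = """
[
  {"id": "ytc_UgwnGOGHF3uKzuxkUMJ4AaABAg", "responsibility": "developer",
   "reasoning": "consequentialist", "policy": "industry_self", "emotion": "mixed"},
  {"id": "ytc_UgxDVgv4HbC2Q5_NECl4AaABAg", "responsibility": "company",
   "reasoning": "deontological", "policy": "regulate", "emotion": "approval"}
]
"""

def index_by_id(response_text: str) -> dict:
    """Parse a raw batch response and index each coding row by comment ID.

    Hypothetical helper: assumes the response is a well-formed JSON array
    in which every row carries an "id" key.
    """
    codings = json.loads(response_text)
    return {row["id"]: row for row in codings}

by_id = index_by_id(raw_response)
coding = by_id["ytc_UgwnGOGHF3uKzuxkUMJ4AaABAg"]
print(coding["policy"])  # industry_self
```

In practice the model output may carry markdown fences or trailing prose around the JSON, so a production parser would strip those before calling `json.loads`; this sketch assumes a clean array.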