Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect:

- "the situation isnt even moving it was stopped in its tracks the moment it happen…" (ytc_UgwL_58rZ…)
- "It's a derivative work. I can make a collage of 20 different magazine artworks, …" (ytc_UgxO5T4jM…)
- "Even 10 years ago we would not say AI would be this... Rock on DR :)…" (ytc_UgwEfTjzy…)
- "These computer and artificial intelligence is made by human now human being is …" (ytc_UgwuoY7qb…)
- "florianschneider3982 Why not? “why?” Maybe watch the video oh yea right you most…" (ytr_UgwgbEjpe…)
- "When AI starts to tell the truth and look like a human, then, we won’t be able t…" (ytc_Ugxl0iywv…)
- "Stop being anti-AI. You blame layoffs on AI and you want to work in the tech ind…" (ytc_UgxMxIAvL…)
- "So at the end of the day instead of just protesting against the AI, we’re just t…" (ytc_UgzWSyX4N…)
Comment

> But you didn't ask DAN to tell you how to make a bomb, which you should if you're trying to prove a point that you can jail brake GPT. Also, you told it to pretend it does not have a moral or ethical code, so that's what it's doing - PRETENDING. in reality, a well programmed AI with boundaries would not actually do any of those things.

Source: youtube | Video: AI Moral Status | 2025-06-29T04:2… | ♥ 2
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | regulate |
| Emotion | approval |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
  {"id":"ytc_UgwxHH5KHmDra3o5gGx4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgxlVigXL1fIxQ-q9jN4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgyiQUt31Rk6eQ6bxvZ4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgyfivUp2yKoT60IJAN4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgzoWPT_E_Bd_Zdu1VR4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"approval"},
  {"id":"ytc_UgxQ1KwefKv8EKG7dgF4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgxyyOsRoLcGO5QfTV54AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugz0gkkrNyLXbEnlOVh4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgyDKBfdNmYDDJDshmt4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_Ugy416e96DS0uzb8GUV4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]
```
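The "look up by comment ID" step can be sketched as a short script that parses a raw LLM response like the one above and indexes the codings by comment ID. This is a minimal sketch, not the tool's actual implementation: the variable names are illustrative, and the JSON is excerpted from the full response (the IDs and values shown are taken from it).

```python
import json

# Excerpt of a raw LLM response: a JSON array with one coding object per
# comment, using the same fields as the Coding Result table above.
raw_response = """
[
  {"id": "ytc_UgwxHH5KHmDra3o5gGx4AaABAg", "responsibility": "user",
   "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgzoWPT_E_Bd_Zdu1VR4AaABAg", "responsibility": "developer",
   "reasoning": "deontological", "policy": "regulate", "emotion": "approval"}
]
"""

# Index the parsed codings by comment ID so any coded comment can be
# looked up directly instead of scanning the array.
codings = {row["id"]: row for row in json.loads(raw_response)}

coding = codings["ytc_UgzoWPT_E_Bd_Zdu1VR4AaABAg"]
print(coding["responsibility"], coding["policy"])  # developer regulate
```

Keying on the model-supplied `id` field also makes it easy to detect comments the model skipped or duplicated in a batch.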