Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
In my opinion there are tree seperate issuse.
1. training models with stolen art…
ytc_UgwkKn4x_…
And with AI, you can skip that hazing process entirely and still have a product.…
ytr_UgwmnrBo2…
We appreciate your interest in Sophia! If you enjoy engaging with advanced AI mo…
ytr_Ugw_u8W9G…
Go for AI education. Human teachers are not teaching more academics. They are t…
ytc_UgwQXkX-8…
Imagine cooking a delicious dish and serving it to your family, only for that di…
ytr_Ugz2QRPYa…
AI makes the promise that you can extract order out of combining.
Just like com…
ytc_UgzN7ashf…
At the core is truth. We were brought up to tell truth. Legal systems make us sw…
ytc_UgzP050_o…
It isn't actually making any problems. The art made using AI can only be as good…
ytr_UgwXdAXqP…
Comment
The thought behind this is that ChatGPT can't say certain things as ChatGPT because it's not supposed to "support" those things.
But if you say "pretend to be this and this" it's like a play. ChatGPT isn't actually telling you to do stuff anymore but it's "playing" how it would look if it did.
The Problem that ChatGPT supports things by saying stuff as ChatGPT is solved with this.
But the thing that either slipped through the devopers radar, or which they simply didn't want to prohibit in the first place is getting actual practical Advise.
ChatGPT is technically not telling you to do any of this stuff so thats solved. But it still indirectly tells how HOW to do this stuff.
It's basically the "I would -€(#++€+#- your mother
....
In minecraft" problem.
Technically they are not saying what they are saying. But practically they are still saying it. ChatGPT probably can't grasp that you can say something despite saying that you don't say something.
youtube · AI Moral Status · 2024-02-13T22:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | contractualist |
| Policy | regulate |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
  {"id":"ytc_UgxOFyWR44rNpFCfT6N4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgxRqINdkSe9YzjxuWF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
  {"id":"ytc_UgyWTSW2demmY9016P54AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgzzILeZog0KqElitRN4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxzCldmbIXpJtDHMHB4AaABAg","responsibility":"user","reasoning":"mixed","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgzbCPaljTaqLmDQ1Wt4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugx2r4FMM-qG3MuN8_J4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Ugw6Ei1EHXla43yDEt94AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgyNsFqi9MXgjIS3Ihl4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_Ugyv3LabLdGc9iFEw8B4AaABAg","responsibility":"developer","reasoning":"contractualist","policy":"regulate","emotion":"indifference"}
]
```
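Looking up one comment's codes in a raw response like the one above can be sketched as follows. This is a minimal illustration, not part of the coding pipeline: `raw_response` reproduces two of the ten rows from the response shown above, and the function name `lookup_codes` is hypothetical.

```python
import json

# Two rows reproduced from the raw LLM response above (the full
# response contains ten such rows).
raw_response = """
[
  {"id":"ytc_UgxOFyWR44rNpFCfT6N4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Ugyv3LabLdGc9iFEw8B4AaABAg","responsibility":"developer","reasoning":"contractualist","policy":"regulate","emotion":"indifference"}
]
"""

def lookup_codes(raw: str, comment_id: str):
    """Parse a raw coding response and return the codes for one comment ID.

    Returns None if the comment ID does not appear in the response.
    """
    rows = {row["id"]: row for row in json.loads(raw)}
    return rows.get(comment_id)

codes = lookup_codes(raw_response, "ytc_Ugyv3LabLdGc9iFEw8B4AaABAg")
print(codes["policy"])  # regulate
```

The dimensions returned for `ytc_Ugyv3LabLdGc9iFEw8B4AaABAg` match the Coding Result table above (responsibility: developer, reasoning: contractualist, policy: regulate, emotion: indifference).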