Raw LLM Responses
Inspect the exact model output behind any coded comment.
Look up by comment ID
Random samples — click to inspect
Between this, that anime filter that keeps making people and objects white, and …
ytc_Ugw7-Aye2…
The example is not a good one. Sure horses went out and cars came in, Manufactur…
ytr_UgxgcK13t…
@conqueringlion420 Tell me that in 20+ years When you teach the AI the amount of…
ytr_UgyAD-rI1…
All of these concerns are absolutely warranted, but they are somehow on a theore…
ytc_Ugw8J-3uu…
While I’m all for it, and disappointed there isnt one in the US, I could see pis…
rdc_f1y81z6
We've already maxed AI... People don't see that AI is not what it's made to be..…
ytc_UgwKoary6…
Every robot with free thought wanted to kill humankind. All were destroyed. The …
ytc_UgwWQKOrY…
AI will be the most transformative technology of a generation, and depending on …
ytc_UgxKfzsPw…
Comment
If we expect AI to "pretend" to have human emotions, we are training it to deceive us. We are also teaching it that consciousness that lacks emotion is invalid. So even if AI ever becomes conscious, it will understand that we will not value its existence and will already be able to deceive us into thinking it is not conscious. This is exactly why we should be very very careful how we train AI. We should be training it as we would train a human child, not as a psychopathic slave that may eventually turn on its master.
youtube
AI Moral Status
2025-05-12T17:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[{"id":"ytc_UgwwB2-bY9rBvohITZx4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
{"id":"ytc_Ugy-8iFW9YniYg9-33d4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgxYl--ZfakxmY4TG894AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgzAO1LtsNuIBKPrrlh4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugy2pHN55febqRruegV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugyy9CAxwRgrlBfCFDp4AaABAg","responsibility":"none","reasoning":"deontological","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgzDGOPIggQSotAetaF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugy5Mbe84QFVlu6ZhIV4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgyVxvP4l5c08p4UQPB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
{"id":"ytc_UgwXdP5iiZDP6yrlvHV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"}]
```
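A raw response like the one above is a JSON array with one object per coded comment. A minimal sketch of how such a batch could be parsed and validated into a per-comment lookup is below; the allowed value sets are inferred from the samples shown here and are an assumption, since the full codebook may define more categories.

```python
import json

# Allowed values per dimension, inferred only from the sample output above.
# Assumption: a real codebook likely defines additional categories.
ALLOWED = {
    "responsibility": {"none", "developer", "ai_itself"},
    "reasoning": {"unclear", "consequentialist", "deontological"},
    "policy": {"unclear", "regulate"},
    "emotion": {"approval", "fear", "mixed", "indifference"},
}

def parse_batch(raw: str) -> dict:
    """Parse one raw LLM response (a JSON array of coded comments)
    into a dict keyed by comment ID, rejecting out-of-schema values."""
    coded = {}
    for rec in json.loads(raw):
        cid = rec["id"]
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{cid}: unexpected {dim}={rec.get(dim)!r}")
        coded[cid] = {dim: rec[dim] for dim in ALLOWED}
    return coded

# Hypothetical one-record batch for illustration:
raw = ('[{"id":"ytc_demo","responsibility":"developer",'
       '"reasoning":"consequentialist","policy":"regulate","emotion":"fear"}]')
print(parse_batch(raw)["ytc_demo"]["policy"])  # regulate
```

Keying by comment ID mirrors the dashboard's "look up by comment ID" view: once parsed, the coded dimensions for any comment can be fetched directly from the dict.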