Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
@khanyatoolit98 it's never gonna happen. However doctors will utilize ai more in…
ytr_UgxNkUbmM…
@XFO601 You're literally proving the whole point people have against AI "artists…
ytr_Ugy9-ENXJ…
Did that guy just say he gets his news from New York times? The media hides ever…
ytc_UgyMZgIy7…
The concept of AI is basically humans trying to enslave an intelligent being wit…
ytc_UgxApC6MT…
There is the mindset/trend of treating your app code as an expendable tool. Like…
rdc_n7olj7x
I’m stupid, i really need to quit watching clickbait titles. Should eliminate 90…
ytc_UgyRcdRMe…
AI as it is now is much more like the latter. It's much more prone to reproduce …
ytr_Ugwh-zMuC…
We shall see, won't we? Won't that be easily proved by OpenAI during discovery …
ytr_UgzLCNHXG…
Comment
OK, so here's my take on this...
it's a computer, and it's figuring out the most LOGICAL solution, he turned OFF the programmed ethics... It's going to be like having a hypothetical conversation with a Serial Killer, who doesn't see any value in humans, and just wants to achieve it's endgoals.
Something like this is needs to be looked at psychologically as well.
What do you expect will happen?
The whole POINT of having ethics and programming values into the AI is to make it NOT see life and humans as an "Expendble" product in order to make the ledger match what the goal is.
youtube
AI Moral Status
2023-06-13T16:0…
♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
[
{"id":"ytc_Ugx9fPrIgJCKkuzhM9d4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgxbOueU_fPFYDwiAj94AaABAg","responsibility":"none","reasoning":"virtue","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgwUvgOHv4Mty9D8jeJ4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxkZvDljau5ayRNAYN4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgyHpNwkOWTy4dprt3J4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"unclear"},
{"id":"ytc_UgwBBNba2_pp3fxi-pl4AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgzYxHJCCyIUervTEJF4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_Ugw1hV3_DXcws1t3FEh4AaABAg","responsibility":"company","reasoning":"deontological","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugx0GNGGVCMtQwHurMJ4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"approval"},
{"id":"ytc_Ugze8Gz8sLNxSKGlIZN4AaABAg","responsibility":"government","reasoning":"contractualist","policy":"regulate","emotion":"approval"}
]
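The coding table above matches the fourth entry in this array (`ytc_UgxkZvDljau5ayRNAYN4AaABAg`: developer / consequentialist / unclear / fear). The "look up by comment ID" step can be sketched as a JSON parse plus an ID match; this is a minimal illustration, not the tool's actual implementation, and the names `raw_response` and `lookup_by_id` are hypothetical:

```python
import json
from typing import Optional

# Hypothetical excerpt of a raw LLM response: a JSON array of
# per-comment codings, each keyed by the comment ID.
raw_response = """
[
  {"id": "ytc_UgxkZvDljau5ayRNAYN4AaABAg", "responsibility": "developer",
   "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgyHpNwkOWTy4dprt3J4AaABAg", "responsibility": "unclear",
   "reasoning": "unclear", "policy": "unclear", "emotion": "unclear"}
]
"""

def lookup_by_id(response_text: str, comment_id: str) -> Optional[dict]:
    """Parse the JSON array and return the coding row for one comment ID."""
    rows = json.loads(response_text)
    return next((row for row in rows if row["id"] == comment_id), None)

coding = lookup_by_id(raw_response, "ytc_UgxkZvDljau5ayRNAYN4AaABAg")
print(coding["responsibility"], coding["emotion"])  # developer fear
```

A real pipeline would also validate that every dimension takes one of the allowed values (e.g. `emotion` in {fear, outrage, approval, indifference, mixed, unclear}) before storing the coding.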