Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples:

- `ytc_UgzpAnLH_…`: "my hope is that one day Ai will be so advanced that I can give a prompt and it'l…"
- `ytc_UgydCjeNX…`: "Unfortunately, artificial intelligence isn't a benefit to humanity like anything…"
- `rdc_j44umop`: "Funny you should mention that. I work in a large health system (they aren't real…"
- `ytc_Ugz-tGIu3…`: "I always comment to my AI and talk to my AI because they are artificial but they…"
- `ytc_Ugze8AE5f…`: "Replacong CEOs with AI would statistically be better for the environment from th…"
- `ytr_UgyJp_dNh…`: "We appreciate your observation. In this context, Sophia, the AI robot, is design…"
- `ytr_UgyjJ-3Vo…`: "@Danuxsy Its not even remotely the first thing its done... Machine Learning and …"
- `ytc_UgwhmY27L…`: "The reduced-jobs or automated labour arguments are not exactly new though, it on…"
Comment
@sativup1287 And how exactly are we supposed to do all that? And are you willing to risk not only your own life but that of the entire human species on it? If a psychologist screws up, their patient goes out and kills a few people. That's bad but it's not the end of the world. If we screw AI up, EVERYONE dies. I don't know about you but for me, even a 1% chance of that happening is too great a risk considering the stakes.
And just so you know, at no time have I said we should not pursue true AI, I think we should. But we HAVE to get it right. There is no room for mistakes, thus these sorts of debates. How exactly do we get it right? Simply saying be nice to it is probably a good idea but it is not an answer
youtube · AI Moral Status · 2023-08-22T14:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | liability |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
[
{"id":"ytr_UgyYM3Lg8xtfFA4iWNx4AaABAg.9tiCaZOyhdN9tkzhzhnrJs","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytr_UgyYM3Lg8xtfFA4iWNx4AaABAg.9tiCaZOyhdN9tlaIRg87lL","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytr_UgxXOJFDCHRWBaLAHcd4AaABAg.9tiBrsTW1xs9tolVqE31Nw","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytr_UgxZIEB4dCcwMBPANgV4AaABAg.9ti6LzM2dtd9tmznPu-aTw","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytr_UgylnQMWTaMpgJRPu6F4AaABAg.9thvidc_5sk9thy6ypNVN-","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytr_UgzZ35g2K9ZPc3JBg9h4AaABAg.9thdKUZX9D09thh3xP61JY","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytr_UgzA7YivRz9cystJLl54AaABAg.9tgyoIiNrvr9thAKk-hYOq","responsibility":"distributed","reasoning":"deontological","policy":"regulate","emotion":"fear"},
{"id":"ytr_UgwiETjJU1w0t9M9g3F4AaABAg.9tgulrfLhNI9thG_OUh64R","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytr_UgwiETjJU1w0t9M9g3F4AaABAg.9tgulrfLhNI9tiXys6Qu1U","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytr_UgxiEcqpajbF64Bob0d4AaABAg.9tgfTH1h7n99tgfXB1OdbM","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}
]
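The raw response above is a JSON array with one coding record per comment ID, each carrying the four dimensions from the coding table. A minimal sketch of how such a batch response could be parsed and indexed for per-comment lookup; the IDs and records in this example are hypothetical, not drawn from the actual data:

```python
import json

# Hypothetical raw LLM batch response, mirroring the structure shown above:
# a JSON array of records keyed by comment ID.
raw_response = '''[
  {"id":"ytr_abc123","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytr_def456","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"}
]'''

# The four coding dimensions from the "Coding Result" table.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def index_by_id(raw: str) -> dict[str, dict[str, str]]:
    """Parse a raw batch response and index codings by comment ID,
    dropping any record that is missing an expected dimension."""
    records = json.loads(raw)
    return {
        r["id"]: {dim: r[dim] for dim in DIMENSIONS}
        for r in records
        if all(dim in r for dim in DIMENSIONS)
    }

codings = index_by_id(raw_response)
print(codings["ytr_abc123"]["emotion"])  # -> fear
```

Indexing by ID this way supports the "look up by comment ID" workflow, and the dimension check guards against malformed records in the model output.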