Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
I only ever use AI to visualize things I can't, or to get inspiration, or get a …
ytc_UgxIoN5g9…
Uhm the art is "inspired" by millions of pieces of art, good luck giving credits…
ytc_Ugwx4TpxG…
Can I just make a quick point and say that not all of us are artists and when us…
ytc_UgzpF3neJ…
“Billions can be lifted out of poverty.” What happens when AI decides that a g…
ytr_UgzRe7QQ0…
Also, I think AI will screw with us, especially in the beginning, by using misin…
ytc_UgytIvI83…
The AI did have a point and it should have stuck to it. While technically lying,…
ytc_UgyzVc9Gv…
You will be beholden to those who distribute the income and they are evil people…
ytr_UgxEZat8f…
Drive a Tesla with FSD V.14 in the US of A and you will be blown away by it’s se…
ytc_UgzRiIl94…
Comment
I'd like to argue the absurdist point of view:
Are we not just a bunch of elements mashed together into a big sack of flesh and anxiety? What makes lines of code any different than the cells that make up our bodies? A single line of code isn't a whole AI, just as a single cell isn't a whole human. Humans are very complex machines that develop ourselves independently in order to better survive our environment. If we can make a computer that does that, and it develops to the same point as we humans have, is it not conscious? If you say no, you must ask yourself, are humans conscious? We sure think we are, but maybe that's just the method we found to survive in our environment. AI can only be as conscious as it thinks it is. Just as we can only be as conscious as we think we are. So does it really matter? I say no. Live and let live.
youtube
AI Moral Status
2023-09-17T05:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | mixed |
| Policy | unclear |
| Emotion | mixed |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[{"id":"ytc_UgxLo9dHYBh3uCT6nyp4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugz5jjAsUTn7ki_Exu94AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgxSN3exjdgYBiPc0-l4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugw6G5AiLy2RbMtVVZx4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgwGg7BnME_ZA_5zBmN4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"resignation"},
{"id":"ytc_Ugw8MgKiE6vnJeWNWkJ4AaABAg","responsibility":"company","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgyajjM7RbgfHLuBy7V4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgyO2EiNX3t1rb5Yl3J4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"resignation"},
{"id":"ytc_UgxXjVeA9QnpciWzhLx4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgyjuzwqWcbw0P5HsMB4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}]
```
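A raw response in this shape can be parsed and sanity-checked before the codings are stored. Below is a minimal sketch in Python; the allowed label sets are inferred only from the rows shown on this page (the full codebook may include more values), and `validate_codings` is a hypothetical helper, not part of the tool itself.

```python
import json

# Label sets inferred from the responses above (assumption: the real
# codebook may contain additional values).
ALLOWED = {
    "responsibility": {"ai_itself", "company", "developer", "none", "unclear"},
    "reasoning": {"deontological", "consequentialist", "mixed", "unclear"},
    "policy": {"none", "regulate", "unclear"},
    "emotion": {"outrage", "fear", "resignation", "indifference", "mixed"},
}


def validate_codings(raw: str) -> list[dict]:
    """Parse a raw LLM response and check every dimension of every row."""
    rows = json.loads(raw)
    for row in rows:
        for dim, allowed in ALLOWED.items():
            if row.get(dim) not in allowed:
                raise ValueError(f"{row.get('id')}: unexpected {dim}={row.get(dim)!r}")
        if not str(row.get("id", "")).startswith(("ytc_", "ytr_")):
            raise ValueError(f"unexpected id format: {row.get('id')!r}")
    return rows


# Usage with a single (made-up) coding row:
raw = ('[{"id":"ytc_example","responsibility":"company",'
       '"reasoning":"unclear","policy":"unclear","emotion":"indifference"}]')
rows = validate_codings(raw)
```

Rejecting malformed rows at parse time keeps the coded table (like the one above) free of labels outside the codebook.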