Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- `ytc_UgyWCE2hB…`: "I don't understand Hinton's take on AI generated fake videos/fake media. We have…"
- `ytc_UgzSIE285…`: "Good interview but I dont appreciate the very personal questions. I found them d…"
- `ytc_UgzneuOK-…`: "I hate the taking inspo from ai art because ai has multiple weird errors in it t…"
- `ytc_Ugzbj5uOf…`: "Ai promter's make art the same way a customer at subway makes a sandwich. They d…"
- `ytc_Ugz06RZRs…`: "Give it 5 years and the robot will turn on the human. I'll be in my cabin in the…"
- `ytc_UgwBUJbSp…`: "How can you make that Ai sentences without making them speak at the same time? (…"
- `ytr_UgzIsb6qM…`: "And it's still recorded. And if you're interested you can buy a router with a S…"
- `ytc_UgzJ22IQ8…`: "I made art that took like weeks to make due to some “minor” issues that lead to …"
Comment
Two takeaways:
1) A genuine AI consciousness will be utterly alien to us, and different instances of it will be alien in different ways. There will be no way to trust it. What grim predictions of an AI future seem to implicitly use as relief is "good thing that it's maybe impossible to create, ha ha"
2) AI as it stands right now is a trivial toy compared to our dreams and ambition of an AGI. What we are astounded by is our own reaction to it. Its a trick of our own neurology as much as it is computer science.
3) bonus takeaway, trying to suss out apparent vs. actual vs. marginal subjectivity in a possibly conscious entity is a real mindscrew, huh? To create and understand an AI like this is to solve consciousness.
Source: youtube · Video: "AI Moral Status" · Posted: 2023-09-16T15:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | resignation |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[{"id":"ytc_UgxLo9dHYBh3uCT6nyp4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugz5jjAsUTn7ki_Exu94AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgxSN3exjdgYBiPc0-l4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugw6G5AiLy2RbMtVVZx4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgwGg7BnME_ZA_5zBmN4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"resignation"},
{"id":"ytc_Ugw8MgKiE6vnJeWNWkJ4AaABAg","responsibility":"company","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgyajjM7RbgfHLuBy7V4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgyO2EiNX3t1rb5Yl3J4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"resignation"},
{"id":"ytc_UgxXjVeA9QnpciWzhLx4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgyjuzwqWcbw0P5HsMB4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}]
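The raw response above is a JSON array with one coded record per comment, keyed by comment ID. A minimal sketch of how such a response could be parsed and then looked up by comment ID, as the page's lookup feature implies (the field names come from the response above; the sets of allowed values are assumptions inferred from the values seen in this sample, and the full codebook may differ):

```python
import json

# Raw model output: a JSON array of per-comment codes (two records shown here,
# copied from the sample response above).
RAW = """[
{"id":"ytc_UgxLo9dHYBh3uCT6nyp4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgyjuzwqWcbw0P5HsMB4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]"""

# Allowed values per dimension -- inferred from this sample, not the official codebook.
DIMENSIONS = {
    "responsibility": {"ai_itself", "company", "developer", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "mixed", "unclear"},
    "policy": {"regulate", "none", "unclear"},
    "emotion": {"outrage", "fear", "resignation", "indifference", "mixed"},
}

def parse_codes(raw: str) -> dict:
    """Parse the response and index valid records by comment ID.

    Records with a missing dimension or an out-of-vocabulary value are
    dropped rather than raising, since batch LLM output can be noisy.
    """
    by_id = {}
    for rec in json.loads(raw):
        if all(rec.get(dim) in allowed for dim, allowed in DIMENSIONS.items()):
            by_id[rec["id"]] = {dim: rec[dim] for dim in DIMENSIONS}
    return by_id

codes = parse_codes(RAW)
print(codes["ytc_UgyjuzwqWcbw0P5HsMB4AaABAg"]["policy"])  # -> regulate
```

Indexing by ID up front makes the "look up by comment ID" operation a single dictionary access instead of a scan over the array.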