Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
Summoning the demon? Are you trying to stop China's AI development, too?
Let's …
ytc_UgxPAWdCq…
Bet you after AI takes over his job, Elon will still want to be paid a trillion …
ytc_UgwoiDJ-2…
The right question is for these idiots is: THEN WHY IS EVERYONE BUSY BUILDING A.…
ytc_UgxIlCJZy…
perfect example of people that should not develop AI and robots, even the ways t…
ytc_UgxhxrN_Y…
This guy is not the only one to blame...if to blame somebody...AI would have bee…
ytc_UgyJ2faeC…
That's right, wealth inequality has only been a problem since [NON-PREFERRED POL…
rdc_d7ktj6d
google's gemini has cut off traffic to websites by giving information in a parag…
ytc_UgwK-au70…
The things that normally motivate violence among humans are things that produce …
ytc_UgwdDmaq8…
Comment
it seems to me that it's important to acknowledge that when we are talking about super intelligence, we are not necessarily talking about sentience.
Your Roomba is not sentient, but it chooses its path, and sometimes that path terrorizes your cat. It is choosing the best path for the task it was given, and is not taking into consideration your cat, because it doesn't understand cat.
These new LLMs and related AI tools are not sentient, but they have demonstrated an ability to solve problems in ways human beings cannot predict, and which we would label as immoral. A super intelligence does not need to understand moral or immoral to take actions that we cannot prevent which will destroy us.
In the book, they talk about wants and needs, but I believe they make pretty clear initially that we cannot say whether those are sentient choices or not.
In the end, it doesn't matter. If an LLM is sufficiently fast and has the ability to guide its environment towards certain goals, then whether it is actually self-aware and is actually malicious against human beings, or is not self-aware and is simply acting in the best way it can calculate to get to those ends, if either of those choices kills us all, do we really care if it's sentient?
youtube
AI Moral Status
2025-11-03T00:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_UgySjw3HUbNfgUPHoo54AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgxbjWjDSEm4eWtkIUt4AaABAg","responsibility":"user","reasoning":"unclear","policy":"unclear","emotion":"resignation"},
{"id":"ytc_UgwTSUZO3MOmecGIYI14AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgwyuLJ9LfUm5FJ10v54AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgwD7DtAACh07ZQG7TR4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugy4QWkWYAhuENknySt4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"fear"},
{"id":"ytc_UgyeB0f8JDA-7a4_EW94AaABAg","responsibility":"none","reasoning":"mixed","policy":"regulate","emotion":"approval"},
{"id":"ytc_UgwLNMQxSFcaMU9y06V4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgzI0FSrTlVZXfcim5x4AaABAg","responsibility":"company","reasoning":"unclear","policy":"unclear","emotion":"disapproval"},
{"id":"ytc_UgySU7nxn2Fy84EqAjF4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]
```
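The raw LLM response above is a JSON array with one object per coded comment, each carrying the four dimensions from the table (responsibility, reasoning, policy, emotion). A minimal sketch of how such a batch could be parsed and indexed for the "look up by comment ID" feature might look like this (the function name `index_by_id` and the validation logic are illustrative assumptions, not the tool's actual implementation; the sample record is taken from the batch above):

```python
import json

# Illustrative sketch: parse a raw batch response and build a
# comment-ID -> coded-dimensions index. One record from the batch
# above is reused as sample input.
raw_response = """
[
  {"id": "ytc_UgwD7DtAACh07ZQG7TR4AaABAg",
   "responsibility": "none", "reasoning": "consequentialist",
   "policy": "unclear", "emotion": "fear"}
]
"""

# Keys every coded record is expected to carry (assumed schema).
EXPECTED_KEYS = {"id", "responsibility", "reasoning", "policy", "emotion"}

def index_by_id(raw: str) -> dict:
    """Parse a raw batch response and map comment ID to its coded dimensions."""
    rows = json.loads(raw)
    index = {}
    for row in rows:
        missing = EXPECTED_KEYS - row.keys()
        if missing:
            # Flag malformed model output instead of silently dropping it.
            raise ValueError(f"record {row.get('id', '?')} missing keys: {missing}")
        index[row["id"]] = {k: row[k] for k in EXPECTED_KEYS - {"id"}}
    return index

coded = index_by_id(raw_response)
print(coded["ytc_UgwD7DtAACh07ZQG7TR4AaABAg"]["emotion"])  # fear
```

Validating the key set up front makes truncated or malformed model output fail loudly at parse time rather than surfacing later as a missing dimension in the coding-result table.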