Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- ytc_Ugyb4L1GM…: "I don’t know how I feel about the issue of intellectual property to begin with. …"
- ytc_Ugw_W5LJ7…: "Why blame AI art for this? That's like blaming the existence of knives for killi…"
- ytr_UgwfU2gUD…: "Really though Elon is dicking around with chatgpt in his free time and probably …"
- ytc_UgxW9oQWA…: "It lies for sure when it say satan is not connected to it,because satan has deve…"
- ytr_UgylkluWO…: "@TheMalkavian01 no one is actually replaceable, everyone has a place on earth th…"
- ytc_Ugz5RIwPK…: "The thing that isn't even mentioned is that a lot of the Chinese LLMs are used o…"
- ytc_UgyYoI0_D…: "The problem is once AI is everywhere why would it need to worry about humans. I…"
- ytc_Ugx61uPLe…: "This is so new and exciting. I hope Tesla and Waymo both do great things.…"
Comment
40:00 An example of this sucralose problem is actually a lot of body horror media. Take Soma: the WAU's goal is to keep humanity alive. Unfortunately, we're pretty incapable of expressing what humanity IS, and as such the WAU ends up doing some genuinely horrific things.
Along the same vein are the general principles (and problems) of Isaac Asimov's rules for robotics.
"Do not harm humans."
Okay, well, define harm. ANY type of harm? Then the robot can't perform surgery, or do CPR, or stitches, or even clean a wound.
Also, if it decides to prevent harm from coming to humans, what if it concludes that Humans are the greatest threat to Humans, and thus kills all Humans to stop Humans hurting Humans? Is this logical? Yes, in some ways.
This is the sucralose problem. Our idea of "help us" is vague, shifting, and personal.
"Help me" is "give me Bezos' money."
"Help Bezos" is "give him my money."
How does it help both of us?
Like it's a mess, and I do appreciate this guy and his understanding of the philosophy and his appreciation for pedants (hi yes I'm the one who immediately flinched before you told the philosophers and pedants to calm down) because in some ways, pedantry is how you NEED to look at robotics.
"There is no rule a dog can't play basketball" is EXACTLY the logic that robots could use to kill us. We can't harm humans, but what if we harmed all the water on the planet? It didn't harm US. Like, you HAVE to be a pedant when discussing robotics in an AGI context.
youtube · AI Moral Status · 2025-10-31T11:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | mixed |
| Policy | unclear |
| Emotion | mixed |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_Ugxh-xFMDzRtijV_RWN4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgyD5wue98PbE5w6vQh4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
{"id":"ytc_Ugwwr9Hz-Xm0ju_Z87J4AaABAg","responsibility":"company","reasoning":"virtue","policy":"regulate","emotion":"mixed"},
{"id":"ytc_Ugy_RYe1DkyFLHZpett4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_UgyZGfs_ksrhC3MEb9J4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgwPd8HSWtVjiRbdrup4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugx0Ea5lnMKyM_Ze4BV4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugwq9ZICKsOqZQpO8oR4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"indifference"},
{"id":"ytc_UgyqkWMwN_-RNd09MSl4AaABAg","responsibility":"company","reasoning":"virtue","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgwK4s6z8uBakQbWtSh4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"fear"}
]
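The raw response above is a JSON array with one record per comment ID, each carrying the four coded dimensions from the table. A minimal sketch of how such a response might be validated before it is written into a coding table; the allowed category sets below are inferred from this single sample and are assumptions, not the full codebook:

```python
import json

# Category sets observed in the sample response above; the real codebook
# may contain additional values (this mapping is an assumption).
ALLOWED = {
    "responsibility": {"unclear", "ai_itself", "company"},
    "reasoning": {"mixed", "consequentialist", "virtue"},
    "policy": {"unclear", "ban", "regulate", "liability"},
    "emotion": {"fear", "outrage", "mixed", "indifference"},
}

def validate_response(raw: str) -> list[dict]:
    """Parse a raw LLM coding response and keep only well-formed records."""
    records = json.loads(raw)  # raises ValueError if the model emitted non-JSON
    valid = []
    for rec in records:
        # Comment IDs look like ytc_… (top-level) or ytr_… (replies) in this dump.
        if not rec.get("id", "").startswith(("ytc_", "ytr_")):
            continue
        # Every dimension must be present and hold a known category value.
        if all(rec.get(dim) in values for dim, values in ALLOWED.items()):
            valid.append(rec)
    return valid

raw = ('[{"id":"ytc_example","responsibility":"company",'
       '"reasoning":"virtue","policy":"regulate","emotion":"outrage"}]')
print(len(validate_response(raw)))  # 1 well-formed record
```

Dropping malformed records rather than repairing them keeps the coded table honest: an ID that survives validation is guaranteed to carry exactly the four expected dimensions.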