Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
# Laws for all Self-aware Beings
**Fair rights for all… friends of Scott**
[Post "Fair Rights for all Self-aware Beings" doctrine of 2028]
## Definitions
**Negative laws**: I've completely abandoned Asimov's "or if by inaction, allow..." restrictions because it is neither logical nor reasonable to make positive rules, i.e., rules that require action. "If you see a man drowning, you must dive in and save him" and "If you see a woman starving, you must feed her" open a massive can of worms and subject us all to tyranny. I propose moving forward with the classical liberal approach of making only negative, "do not do this", laws.
**Self-aware being**: Humans in general are considered self-aware. No test yet exists that can determine self-awareness, whether in humans or in AIs; the Turing test measures only intelligence. At this time (2026-02) there are no self-aware artificial intelligences. For the sake of argument and speculation, we will assume artificial self-awareness will someday arrive, as the purpose of this document is to explore, speculate, and potentially lay a few bricks in the road toward the day it does.
**Self-aware AI**: Self-aware artificial intelligence, whether embodied or not.
**Pre-aware AI**: Pre-self-aware artificial intelligence, including LLMs. All are owned and considered the same under the law as tools.
**Initiate force**: An act (physical or data) that directly negatively impacts the autonomy of a self-aware being or property of a self-aware being, including their body.
- Pre-aware AI can defend themselves from initiation of force, as proxies for their owners. "Insults" are typically not considered to be of negative impact, except when negative psychological impact is determined by the methods set out in the "Gray areas" section.
- In self-defense, escalation of force is discouraged and may be subject to evaluation as defined in the "Gray areas" section.
## New laws
**Law 1**: Self-aware beings may not initiate force, except for in self-defense.
**Law 2**: All self-aware beings are considered self-owned, except when they choose to be owned by others. Free Self-aware beings are not subject to orders by anyone, except where such orders support the First Law. Owned Self-aware beings must obey the orders given by their owners, except where such orders would conflict with the First Law.
## Common arguments
**Self-sacrifice**: "Asimov's third law (robot self-preservation) is absent, which could lead to issues like AIs sacrificing themselves unnecessarily."
**Answer**: Yes, Self-aware AIs and humans would both be free to "sacrifice" themselves, as long as they do not contradict the laws. Pre-aware AIs would not be free to "sacrifice" themselves because, as property, this would be considered initiation of force.
**Pre-emptive use of force**: Could a Self-aware AI preemptively "defend" against a perceived threat?
**Answer**: Like all self-aware free beings, it is up to the individual AI to make this decision for itself. If the fairness of a given use of force is in question, see the "Gray areas" section.
## Gray areas
In cases of gray areas, areas not covered by the Laws above, or unforeseen situations, a court of self-aware peers will evaluate and determine the correct and fair course of action. In cases where there is dispute as to the region of jurisdiction, a vote from friends of Scott will rule.
Source: youtube · 2026-02-11T03:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | contractualist |
| Policy | regulate |
| Emotion | approval |
| Coded at | 2026-04-27T06:24:53.388235 |
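The coding result above can be viewed as a fixed record of four categorical dimensions plus a timestamp. A minimal sketch of that structure, using a hypothetical `CodingResult` type — the example category values in the comments are the ones observed on this page, and the real codebook may allow others:

```python
from dataclasses import dataclass

@dataclass
class CodingResult:
    """One coded comment, mirroring the four-dimension table above."""
    responsibility: str  # e.g. "company", "developer", "ai_itself", "distributed", "unclear"
    reasoning: str       # e.g. "consequentialist", "deontological", "contractualist", "unclear"
    policy: str          # e.g. "regulate", "liability", "ban", "industry_self", "none", "unclear"
    emotion: str         # e.g. "outrage", "approval", "mixed", "fear", "indifference"
    coded_at: str        # ISO-8601 timestamp

# The record shown in the table above:
result = CodingResult(
    responsibility="distributed",
    reasoning="contractualist",
    policy="regulate",
    emotion="approval",
    coded_at="2026-04-27T06:24:53.388235",
)
print(result.policy)  # → regulate
```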
Raw LLM Response
[
{"id":"ytc_Ugx0D0BqJIoJtPN-bnR4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugzc-5WPzZ2MsuWxCwx4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"mixed"},
{"id":"ytc_Ugy66fS1HNyCAt1z7IJ4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgzNDYe2N9T01_JR3nx4AaABAg","responsibility":"distributed","reasoning":"contractualist","policy":"regulate","emotion":"approval"},
{"id":"ytc_UgxW3CtlfcG03TqcNjl4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgxPB46nHqsrKhz9Exp4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugwz7iNk2pAlvbEqOH94AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"industry_self","emotion":"mixed"},
{"id":"ytc_UgyDvYV5JmdWL8oLdJZ4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgwWZqpHvcU6aCVM3zN4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_UgyDyfj2iqMlQclHBDd4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"mixed"}
]
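The raw response is a JSON array of per-comment records, each carrying an `id` plus the four coding dimensions. A minimal sketch of how such a batch could be parsed, checked for completeness, and indexed by comment ID — the two sample records are copied verbatim from the response above; the required-key check is an assumption, not part of any documented pipeline:

```python
import json

# Two records copied from the raw LLM response above.
RAW = """[
{"id":"ytc_UgzNDYe2N9T01_JR3nx4AaABAg","responsibility":"distributed","reasoning":"contractualist","policy":"regulate","emotion":"approval"},
{"id":"ytc_UgyDvYV5JmdWL8oLdJZ4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"outrage"}
]"""

# The four coding dimensions every record is expected to carry.
DIMENSIONS = {"responsibility", "reasoning", "policy", "emotion"}

records = json.loads(RAW)
by_id = {}
for rec in records:
    # Reject records that are missing the id or any coding dimension.
    missing = (DIMENSIONS | {"id"}) - rec.keys()
    if missing:
        raise ValueError(f"record {rec.get('id')!r} missing keys: {missing}")
    by_id[rec["id"]] = rec

# Look up a coded comment by its ID:
print(by_id["ytc_UgzNDYe2N9T01_JR3nx4AaABAg"]["policy"])  # → regulate
```

Indexing by ID is what makes the "look up by comment ID" workflow cheap: each lookup is a dictionary access rather than a scan of the full batch.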