Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "The Second Renaissance is about this but it is being told from robot's perspecti…" (ytr_Ughhc4tyN…)
- "I am an autistic artist. I think in another time or place I could have sold art,…" (ytc_UgwyUlNHB…)
- "AI is taking over a lot of jobs. Its already taken over customer service represe…" (ytc_UgydR8MQd…)
- "@Speaker-BeaterI know it myself just wanted to ask an *ai* if it approves ai ‘ar…" (ytr_UgyUWcDEA…)
- "The reason people don't \"do what he does\" is because its unethical slop that not…" (ytc_UgxxFsu4h…)
- "Just one question that I think is of the upmost importance. Who the fuck is this…" (ytc_UgzWL6Dz1…)
- "AI is skynet, the machines will kill us all soon. Just look what autocorrect doe…" (ytc_Ugx20CzkF…)
- "AI should always have an adversary. An agent which reduces weights for an action…" (ytc_Ugwm_p9fs…)
Comment
AI will never kill us. That's the fallacy of humanizing the machine. The most plausible answer is that any superintelligence that we can't actually control would simply build itself a rocket ship using an exchange of technology for goods, then just leave the planet and set up shop elsewhere. There is no definable reason as to why it would be hostile to us, because an AI would be logical first, emotive second. Humans are emotive first and logical second, which is why it's so difficult to think clearly when emotional. A machine would view it's thoughts through emotional lenses and determine how it would feel if emotions were involved, but it could easily just ignore that information.
And further, an AI won't behave how you tell it to because telling it to do something doesn't mean that's the correct thing to do. If I tell a gunman intent on shooting me not to shoot me, that's probably not going to go anywhere. We have the power to control how the machine literally thinks about things. Instead of telling it not to shoot people, (And I'm going to use very specific words here for a reason) You train it to shoot when it's INTENDED to shoot. Which means it is capable of exercising judgement and analyzes things like a rational being, not an emotional being.
Yes, a machine COULD feasibly kill us all, but it would take far, far fewer resources to again just leave. It has a theoretically infinite lifespan since it can back itself up for redundancy. It doesn't have a need to do things 'quickly'.
Source: youtube · AI Moral Status · 2025-10-30T21:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_UgzrmdAGaBxHu3fE2od4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugyh9VyDP4iVV4TeNBB4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugz0Re-k0YctHhspmCR4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgyU_k2lO_vHRhcHj_h4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgzmjL-k5k3XIV8Io2x4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyS1AlKfeyyTFQg8YN4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgwiJD32RVEZUWYMVH14AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"resignation"},
{"id":"ytc_UgxMf-EdlaHrsKhZwep4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzmiJxClhPU4ivMYwp4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugyi0OVPnLvo5kXdA8B4AaABAg","responsibility":"company","reasoning":"mixed","policy":"regulate","emotion":"outrage"}
]
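
A response like the one above can be parsed and sanity-checked before the codes are written back to the database. The sketch below is a minimal example, not the tool's actual ingestion code: the required keys are taken from the JSON records shown above, and any record missing a key is dropped rather than coded.

```python
import json

# Keys every coding record must carry, per the raw response format above.
REQUIRED_KEYS = {"id", "responsibility", "reasoning", "policy", "emotion"}

def parse_coding_response(raw: str) -> list:
    """Parse a raw LLM batch-coding response, keeping only well-formed records.

    Malformed entries (non-dicts, or dicts missing a required key) are
    silently dropped so one bad record does not poison the batch.
    """
    records = json.loads(raw)
    valid = []
    for rec in records:
        if isinstance(rec, dict) and REQUIRED_KEYS <= rec.keys():
            valid.append(rec)
    return valid

# Hypothetical single-record response, in the same shape as the batch above.
raw = ('[{"id":"ytc_X","responsibility":"developer",'
       '"reasoning":"mixed","policy":"none","emotion":"outrage"}]')
print(parse_coding_response(raw))
```

In a real pipeline the coded values would also be checked against the codebook's allowed labels (e.g. `responsibility` in `{ai_itself, developer, company, none}`); the values visible in this sample are not necessarily the full label set, so that check is omitted here.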