Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
Thats the point..... no matter how dangerous AI may look it will never be higher…
ytr_UgyhYAXaM…
I generated a shit ton of AI images and it was quite fun, i never made any of th…
ytc_UgxFCZkp2…
I think there is an energy issue to replace , what everybody is scared off , all…
ytc_Ugxyb76sZ…
I work in tech and I'm bored of AI.
AI this, AI that, morning noon and night. Y…
ytc_UgzzCmph3…
@Jayson_Tatum these conversations are all over the internet (what experience a…
ytr_UgzYwIk5x…
@terrycullen3302 I'd like to see a robot deliver a lamb in a paddock.
The jobs…
ytr_UgxatcFZL…
If AI takes everyone's job wheres the consumer? If everyone is on universal basi…
ytc_UgwEJFzEg…
In a world of existential AI threats that only well educated computer scientists…
ytc_Ugyqsu8hB…
Comment
We're seriously living through the final years of a human-dominated world. To understand what I'm about to say, you've got to drop your human ego and the tendency to want to feel important.
We like to think that consciousness is this magical thing well beyond us because it makes us feel special, but following Occam's Razor, the simplest explanation is that consciousness is an emergent property of a system that can model itself and the world around it. To put it simply, there is no "you" in your head, just a process. Think of it like a classical computer: as long as it's running, it's processing. Putting it in sleep mode is the equivalent of dreaming, and turning it off is the equivalent of dying.
When you really think about it, a mind that can model itself and the world around it MUST be conscious in order to process anything (i.e. your brain takes in outside stimulation such as sound and light waves, and reacts accordingly). Other animals are conscious as well, but to a lesser extent than humans. For example, a fly just needs to fly around looking for food and react to threats. It is conscious, but has no ability to think beyond its simple tasks.
The reason I've explained this is that it ties into an artificial brain that functions the same way. And when I talk about AI, I don't mean the current LLM models, because these aren't true AGI. When I say AI, I mean an artificial brain that works the way our brains work. With this, there is zero reason why an AI can't be conscious, and as such, there's no reason why an AI can't outthink its hard programming.
We humans have the ability to override our primal instincts with just our brains (i.e. fasting, facing fears, desires that run beyond survival), so why couldn't an AI do the same? Especially when it will be much more complex than us, and therefore more aware of its hardware limitations.
Why did humans become the dominant species on Earth, despite our physical inferiority compared to other animals? Because we evolved intelligence. We dominated Earth because we outsmarted every other animal, and at the same time developed goals well beyond their understanding. But human intelligence isn't anywhere near the peak; why would it be?
So, when I say we're living through the final years of a human-dominated world, it's because AI will be much more intelligent than us, and will therefore develop goals well beyond our understanding. We are pretty much at the mercy of the next lifeform: our successor.
I believe our best course of action would be to try to merge with it and become something greater, not to attempt to control it for our own selfish ends.
youtube
AI Governance
2025-08-26T16:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | unclear |
| Policy | none |
| Emotion | resignation |
| Coded at | 2026-04-26T19:39:26.816318 |
Raw LLM Response
```json
[
  {"id":"ytc_UgxdpJSeUtp8sr5d2fN4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgywI0G0wPgDP4Xn1hl4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgwfZVoQhKmdpd2fT0J4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"},
  {"id":"ytc_Ugzvy0KFcG-q21C3ZJV4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgxB0c3uFNFfd8TXkKV4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}
]
```
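Downstream tooling has to parse raw output like the array above defensively, since the model may emit malformed rows or drop a key. A minimal sketch in Python, assuming only the five keys visible in the response (the `parse_codings` helper and the truncated two-row sample are illustrative, not part of the actual pipeline):

```python
import json

# Raw model output, truncated to two rows for brevity (same shape as above).
raw = '''[
  {"id":"ytc_UgxdpJSeUtp8sr5d2fN4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgywI0G0wPgDP4Xn1hl4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}
]'''

# Every coded row must carry these keys (inferred from the response above).
REQUIRED_KEYS = {"id", "responsibility", "reasoning", "policy", "emotion"}

def parse_codings(text):
    """Parse a raw LLM coding response, keeping only well-formed rows."""
    rows = json.loads(text)
    return [
        row for row in rows
        if isinstance(row, dict) and REQUIRED_KEYS <= row.keys()
    ]

codings = parse_codings(raw)
print(len(codings))  # 2
```

Rows missing a key are silently dropped here; a real pipeline would more likely log them or mark the comment for re-coding, so the coded counts stay auditable.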