Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
The only important question to ask about AI (AGI, ASI) is whether it can replace humans in their role as the most evolved "conscious" system we know of. In other words, is there some quality biological systems possess that can't be replaced by silicon or other non-biological means? If the answer is "no", then I see no reason to conclude that humans, or by extension biological systems, are needed.
The idea of "agency" is somewhat overplayed here. It's not at all clear to me that humans have agency in any real sense of the word, and even in practical terms the claim doesn't hold up. Does an individual ant or bee have agency? These animals do build things, but only as a collective society, and no individual really has much say in the matter.
Your agency, such as it is, is very limited in the same way: it depends on many variables outside your control. People seeking power wish to gain more agency than others, but even that marginal relative gain doesn't mean you aren't subject to complexity that diminishes whatever agency you think you have.
The only true agent is one that has almost total control and free choice in the actions it takes. What I'm describing here is a "god" of sorts, when you think about it. A god, we could say, has true agency.
If, on the scale of agency, an AI can surpass humanity, then I think we're toast. It's that simple. This doesn't necessarily mean we stop existing, because ants and bees still exist even though we have more agency than they do (debatable). But they exist only because they don't come into "terminal conflict" with us. Mosquitoes, on the other hand, may find in the future that they do, and so might actually be terminated.
People will talk about "the meaning of life" or "our purpose". I tend to think our purpose is to organise and robustly store structured information over time and space. Biology has done this extremely well so far (DNA, etc.): we increase entropy in our environment (the wider system) in exchange for the ability to lower it in these local, complex, organised structures.
The question is whether AI can do that task more efficiently than humans. If so, we have achieved our purpose by creating it: our purpose then is to obsolete ourselves and give rise to a better-evolved, more robust system. Four billion years of evolution has been incrementally going through the same process, and there's no reason, I'd conjecture, to think we are anything but another stepping stone in this evolutionary story of obsolescence.
The only question then becomes: is there enough "overhead" in terms of "free energy" (energy not needed by whatever supersedes us) to let us survive, as the ants and bees survive us?
I do not know the answers to the questions I have posed here, but if the answer to the first is "no" and to the second "yes", then the third becomes very important.
If the answer to the third is "yes", we don't have to worry too much, I'd guess. If it's "no", again, we really are toast.
If the answer is "yes", though, what does that even look like in real terms for us as individual humans?
The most logical answer I've come up with is that we end up in "heaven", where heaven is a simulation we are "uploaded" to. I would suggest that the encoded versions of ourselves would be far more robust, and could be powered while orbiting the sun in some sort of energy-collecting "pod" as part of a Dyson swarm. There we would remain for a huge amount of time (tens of millions of years, perhaps), just "living" out our bespoke simulations of "life", but choosing to do so with more agency, like a god.
This seems to be the ultimate best outcome humans can hope for.
Source: youtube · Viral AI Reaction · 2025-06-24T10:3… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
{"id":"ytc_Ugwc4HoJMSODY1N06FJ4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugy6IQpxs8JFFv2e-054AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"fear"},
{"id":"ytc_UgzPwibterjEEHW56OF4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_Ugx0QCZI8secR93NP454AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"fear"},
{"id":"ytc_UgwRjW7dYSwkX9FR_lt4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugzqde8rPI0Ukc5bBDp4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugzba2UITd_YC7pUawrR4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgzjaKMod5RjHOeOxTF4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgygFKMkqOzoWbFNh1l4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
{"id":"ytc_Ugz5B6paNpZDkUctdDt4AaABAg","responsibility":"company","reasoning":"virtue","policy":"regulate","emotion":"outrage"}
]
```
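A coding result like the table above can be recovered from a raw batch response by parsing the JSON and indexing the rows by comment ID. A minimal sketch of that lookup, assuming the response is a JSON array in the shape shown (the function name `index_codings` is illustrative, not part of the tool):

```python
import json

# Two rows copied from the raw batch response above, abridged for the example.
raw_response = """[
  {"id": "ytc_Ugzqde8rPI0Ukc5bBDp4AaABAg", "responsibility": "none",
   "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugz5B6paNpZDkUctdDt4AaABAg", "responsibility": "company",
   "reasoning": "virtue", "policy": "regulate", "emotion": "outrage"}
]"""

def index_codings(raw: str) -> dict:
    """Parse a batch LLM response and map each comment ID to its coded dimensions."""
    return {row["id"]: row for row in json.loads(raw)}

codings = index_codings(raw_response)
coding = codings["ytc_Ugzqde8rPI0Ukc5bBDp4AaABAg"]
print(coding["emotion"])  # indifference
```

The coded comment shown above resolves this way: its row in the batch carries `reasoning: consequentialist` and `emotion: indifference`, matching the Coding Result table.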