Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
I think AI will be the Beast in Revelation 13 working with the Antichrist. Seek …
ytc_UgxGVZ53k…
The scarriest problem with AI is human bias. We will never want to create AI to …
ytc_UgyKorsn0…
*AI Developers*: "AI will need far more energy than our current infrastructure c…
ytc_UgxVkhO0c…
Me personally, i can never get an AI generater to make what i want, what i imagi…
ytc_UgzuxbPWK…
....Yeah, I think of The Terminator not a robot rape park, but you do you…
rdc_nesh6ox
Now I kinda wanna do this too... Screw ai, it's so soulless. I just don't get ho…
ytc_UgyY_eS-x…
as an ai engineer, this video is quite silly. anyone who knows how transformers …
ytc_UgwED9Kri…
tbf this is how you solve the issue of alignment with AI which is actually a ser…
ytc_UgwTWVigZ…
Comment
The path we're on to human extinction is less of a hypothetical and more of a guarantee, the only question is how long it'll take. With the increasing pace of technological advancements, we wouldn't even have to factor in _future_ advancements to know that we will eventually create a program that is smarter than anything else has ever been in history. But these programs develop their own goals, their own rules, and how they actually do this is a mystery to us. The best AI researchers in the world have no idea how an LLM really works on the inside, or why it does the things that it does. They are black boxes. And when the black box crossing that tipping point of being able to self-improve & self-replicate, well it controls the planet. Humans become as important to the world as an ant is to a human. And as the program improves itself, it gets better and better at improving itself, exponentially, until we simply do not have the brain power to be able to understand how much more advanced this thing is than the sum of every human that has ever lived put together. Humans thinking we can somehow make ASI work for us, safely, is like an ant thinking it could control the sun and have it rise in the west, And we're racing towards this, with unlimited resources going into trying to be the first company to market.
youtube
AI Moral Status
2025-10-31T00:1…
♥ 2
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_UgxeSfHptS34dJlh-554AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugyu6z4Pp0svDkQdioV4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
{"id":"ytc_UgxKA8WqDRKdtTNh_up4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgwP6zO5qhharFxQsOt4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgzSz1XHI17u8MBJ2ih4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgwWdXY7MpBr5d1U4ap4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytc_UgwJf88m0MM_JRsHISN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_UgyYK6U4AjSeIwrKh5l4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgwK4ebkmf3weXzuyH54AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"ban","emotion":"resignation"},
{"id":"ytc_UgzspF-bigi0u0wyhG94AaABAg","responsibility":"ai_itself","reasoning":"virtue","policy":"none","emotion":"outrage"}
]
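The coding result table above can be recovered directly from the raw LLM response: parse the JSON array and index each row by its comment ID. A minimal sketch (the two-entry `raw` string below is abbreviated from the full response shown above; the field names match the response exactly):

```python
import json

# Raw LLM response, abbreviated to two entries for this sketch.
raw = '''[
{"id":"ytc_UgyYK6U4AjSeIwrKh5l4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgzspF-bigi0u0wyhG94AaABAg","responsibility":"ai_itself","reasoning":"virtue","policy":"none","emotion":"outrage"}
]'''

# Index codings by comment ID for constant-time lookup.
codings = {row["id"]: row for row in json.loads(raw)}

# This entry carries the same values as the coding result table above
# (distributed / consequentialist / regulate / fear).
coding = codings["ytc_UgyYK6U4AjSeIwrKh5l4AaABAg"]
print(coding["policy"])   # regulate
print(coding["emotion"])  # fear
```

In practice the lookup dict also makes it easy to spot IDs the model skipped or duplicated by comparing `codings.keys()` against the batch of comment IDs that was sent.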