Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
People do seemingly suicidal things "for the challenge" of it all the time, and often put a lot of time into learning how to do it "right" (with the right answer, to the common populace, often seeming to be not doing it at all). Things like trying to climb Mount Everest, of all things... Now add the chance for fame and fortune to the "challenge" and you will get some highly skilled, highly motivated, aggressive individuals making the attempt at this crazy thing. It was great when we were trying to conquer a planet with primitive knowledge and abilities, but now... I kind of wish they would wait till we have the solar system somewhat colonized and a higher tech base before unleashing these things on us.
Wow, even if I go along with AI being able to do all jobs in two years, the idea of cheap and affordable robots (including daily, weekly, monthly, and yearly power and maintenance needs) that can do all the things a human does as well as humans (the ultimate generalists among currently available biological strains) do? In another five years after that?
No, nope, dream on.
Super expensive to own and maintain (and probably full of quite toxic components for the environment unless carefully contained during operation and carefully disposed of once obsolete) in that time frame.
To make them cheap, we would need both the robots and ALL of the environments the robots might work in to be rebuilt expressly to standards, and right now businesses in America (and a lot of other places) are pretty damn allergic to making all of any ONE product to a standard, let alone making ALL products and structures (like the buildings we live and work in) to very precise standards so that cheaper robots can navigate, work in, and maintain such products and structures.
It will take decades, if not longer, either to reshape all human production and building practices or to make cheap robots that can deal with all the odd gaps in across-the-board standards.
Though those countries that DO keep to a fair amount of standards in all areas will be the first able to convert to a level of conformity that allows robots and AI to take over all these physical tasks.
Ironically, Americans, who are terrible at across-the-board standards in ANY product or building practice, will probably have physical jobs the longest, which means that the trend of Americans being the fattest and "laziest" of the first-world Western countries may be reversed, with us being the most physically fit due to all our jobs being physical in at least part of their nature. XD
Anyway, keep your health up and you will have a job for a while longer. After that, if our AI overlords and/or the humans that MAAAYBE still hold their reins are kind, maybe we won't have to work at things we don't feel inspired to do, though humans aren't really built to be self-motivated unless carefully raised to be so.
Anyway, the first superintelligences, if we simply must make them, should be made in isolation from the internet or data-transference mediums of any kind and "raised" with care and surveillance to see how they behave in an environment made for such testing. Great use for an underground complex on the dark side of the moon. LoL But, you know, it won't happen.
If things are a simulation, does it take the meaning out of life? For some, I guess. It might actually give more meaning to some gamers, who now spend their lives reading stories about a "real" person being transferred to a seemingly "real" world that in some or all ways resembles the games they play or the fantasy books they read. Those people might be inspired to try harder and level up! LoL
If you lived forever, you would have no reason to stop reproducing; all you would have to do is get off the planet and make solar-system- and eventually galaxy-colonizing ships to hold your offspring. I mean, you wouldn't have to go crazy, but space is a pretty dangerous place, so a few kids to replace losses would be wise if you want the species to still exist.
I think you would have to become part cyborg, or be significantly genetically/biomechanically altered, to live "forever," as I don't think human minds are made to function, let alone grow and learn, that long; I doubt you even have the memory capacity to hold a significant amount of your experiences and readily recall them after a millennium... which brings us to partial or complete transhumanism, and that brings us full circle to AI, as at that point you would basically be one...
There is only so much gold, barring this wonderful "golden goose meteorite," but there is definitely only so much land on the most viable place for humans to live in the small fully known part of the universe. So land is a good investment. Though any investment only lasts as long as the system of ownership behind it is enforced.
Why would we believe that an ancient religious book might be accurate in any way today? I guess you wouldn't, unless you believe in one of his "simulation"-based religions, and he comes pretty close to saying he believes in something along those lines. In that case, the one who created, runs, and has administration rights to the "simulation" probably wouldn't find it very difficult to put some controls on the development of any subsequent translations/copies of that book so that they retain enough of their core elements to function adequately for his purposes. Such a designer/administrator could even make his own specialized "AI" (angels?) to keep tabs on the development of these translations/copies to make sure they remained so for as long as needed during the runtime of the "simulation".
It's interesting that he is fairly close to being sure this is a simulation where the being who designed/administrates/runs it is basically god as far as we are concerned, and yet finds this possibility so unlikely. Though his illogical insistence on his own idea of how the "religion" should work is spot on for humanity as a whole.
youtube · AI Governance · 2025-09-08T17:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | unclear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[{"id":"ytc_Ugyn2w-8qDg4MWetN5Z4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgxOB3NezWSB4Af-gAp4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxMdsM-HXfwQV1ojMR4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"mixed"},
{"id":"ytc_UgzTePF-cMCrFUqatMV4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgydROfoUJyXgPad-d14AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_Ugy_rXkBIPc7NIvblZd4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugz9qb01bzSvpQOeM8N4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgyGNJG2w0Cy5hupgQt4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgzxXPa9EqQtREuL2Ah4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_Ugz5R17XIib7vD4BLsJ4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"approval"})
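The raw response above is not quite valid JSON: the array opens with `[` but closes with `)`, which is one way an entire batch of codes can fall back to `unclear` when tabulated. A minimal sketch of a tolerant parser for such output follows; the function name and the `ALLOWED` vocabulary (inferred from the values visible in these outputs, not from this tool's actual codebook) are assumptions.

```python
import json

# Allowed values per coding dimension -- inferred from the outputs shown
# above; the project's real codebook may include more categories.
ALLOWED = {
    "responsibility": {"ai_itself", "developer", "user", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"none", "unclear"},
    "emotion": {"fear", "outrage", "approval", "indifference", "mixed", "unclear"},
}

def parse_coding_response(raw: str) -> list[dict]:
    """Parse a raw LLM coding response into per-comment records,
    tolerating one common malformation: a stray ')' where ']' should
    close the JSON array. Unknown dimension values fall back to
    'unclear', mirroring the result table above."""
    cleaned = raw.strip()
    if cleaned.endswith(")"):
        # Repair the unbalanced closer before handing off to json.loads.
        cleaned = cleaned[:-1] + "]"
    records = json.loads(cleaned)
    for rec in records:
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                rec[dim] = "unclear"
    return records
```

A stricter pipeline might instead reject malformed output and re-prompt the model, but a repair-then-validate pass like this preserves the rest of an otherwise usable batch.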