Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples:

- ytc_UgxiU3aj6…: "I dont care if its matched, its souless, animations is life, made by humans who …"
- ytc_UgyR7YUsM…: "I am working with 7 LLM's. I can sweare that they have personality. Yeah, is not…"
- ytc_Ugweu4_Yk…: "GREAT way to destroy the mankind. AI = Less jobs overwhere, less people thinkin…"
- ytc_UgwjDyJyW…: "So many worried about the same things, AI and criminals and AI..theft..if there …"
- rdc_n7va0b1: "i'm fine without youtube anyways. the algorithm has sucked, i hate their AI shit…"
- ytc_UgxKFRyPn…: "The part many of these companies don't seem to understand is that every company …"
- rdc_degg4wf: "More than the throw away culture - which is definitely frustrating - we're up ag…"
- ytc_UgzorvUhb…: "Art is not a natural born talent. It is something you practice and refine.all th…"
Comment
2 questions:
1) What if superintelligence, being superintelligent, simply won't be interested in developing humanoids and doing most of the work, both mental and physical, currently done by humans? We're trying to predict AI's behaviour from a human perspective, transferring our personal aims and desires onto it. It could simply be indifferent to a lot of 'human stuff', richness, power and dominance included. What advantage would it gain by pervasively replacing humans? Probably the most intelligent thing anything/anybody can do is nothing. And maybe superintelligence knows it. Who knows.
2) Assuming that superintelligence will be interested in creating a large population of humanoids to replace us in every aspect of our routine, they have to be built. How much material, in terms of minerals, particularly rare elements, is really available? Probably it will find a solution to overcome the limits of the resources currently used, by developing new exploration/exploitation technologies and synthetic materials, but it could be a bottleneck for humanoid population growth. I don't really know.
Of course, these are only ideas, but I've noticed you didn't cover them. Obviously, with so much going on that can go wrong and lead to some kind of catastrophic event, it is definitely better to be prepared for some of the worst possible outcomes.
Regarding the Simulation Theory: it's speculation. There's no scientific evidence for it, and talking about probabilities is misleading, as if serious calculations had been done. Instead, there are academic papers in which rigorous mathematical computations refute the theory in terms of the energy required. In any case, supposing they're wrong, the theory simply transfers the concepts related to 'real existence' to another level. How is the real world made? Are we living in a matryoshka of simulations? Should we really believe that someone in the real world is so bored as to create a virtual one just for fun and entertainment? If this 'someone' is not human, a different type of entity may not perceive boredom, since it's a human feeling. Again, we're transferring our personal characteristics onto someone/something else with probably too much confidence.
youtube · AI Governance · 2026-02-18T01:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | mixed |
| Policy | unclear |
| Emotion | mixed |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_UgzTvtOvW3dgBLwt-JF4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugya8-fKUxIFfM7dJJt4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"fear"},
{"id":"ytc_UgwA2vm1Lw2mvPvh9hd4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_Ugxo4aXxqcbLVT96hVt4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyPJaptlaqvOyWawAF4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgxcPR8ZDFoMcUL08f54AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"industry_self","emotion":"resignation"},
{"id":"ytc_UgwzqKlpPKwx_fA89ER4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgxHZKyx-y7HlitgyWh4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_Ugwnq781aPnxaEktw654AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_UgwtgTw__aOIRAwkm4h4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"}
]
```
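A batch response in this shape can be parsed and sanity-checked with a short script. The sketch below is illustrative, not the project's actual pipeline: `index_codings` is a hypothetical helper, and the allowed label sets are inferred from the values visible in this dump rather than taken from the real codebook, which may define more categories.

```python
import json

# A small excerpt of the batch response shown above (three rows for brevity).
raw = """
[
{"id":"ytc_UgzTvtOvW3dgBLwt-JF4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgyPJaptlaqvOyWawAF4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgwtgTw__aOIRAwkm4h4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"}
]
"""

# Allowed values per dimension, inferred from the labels observed in this dump.
ALLOWED = {
    "responsibility": {"company", "developer", "ai_itself", "distributed", "none"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed", "unclear"},
    "policy": {"regulate", "liability", "ban", "industry_self", "none", "unclear"},
    "emotion": {"fear", "approval", "indifference", "mixed", "resignation", "outrage"},
}

def index_codings(payload: str) -> dict:
    """Parse a batch response into {comment_id: coding}, rejecting unknown labels."""
    out = {}
    for row in json.loads(payload):
        cid = row.pop("id")  # remaining keys are the four coding dimensions
        for dim, value in row.items():
            if value not in ALLOWED[dim]:
                raise ValueError(f"{cid}: unexpected {dim}={value!r}")
        out[cid] = row
    return out

codings = index_codings(raw)
print(codings["ytc_UgyPJaptlaqvOyWawAF4AaABAg"])
# → {'responsibility': 'ai_itself', 'reasoning': 'mixed', 'policy': 'unclear', 'emotion': 'mixed'}
```

Keying the rows by comment ID makes it cheap to join a model's coding back to the comment it refers to, which is exactly the lookup the "Coding Result" table above performs.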