Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
The feeling is becoming overwhelming that humanity is on the cusp of some truly unimaginable change, a turning point more pivotal to life on Earth than anything that's come before. Even without reaching superintelligence, if we even just made robots that are as competent or even only a little worse at general labor than the average human, then we're going to build thousands of times more of them than we have human workers and, when it comes to resources and doing tasks, we'll practically be playing creative mode compared to what we can do now. We've got shitty but semi-functional robots that we basically just invented and that will almost certainly get much better fast with the ridiculous level of funding they have, and we've got software that can take in incredibly complex information and output a response that is deemed satisfying by its creators, only getting better over time as it gets more and more training. It's like there are so few hurdles left, if any at all besides a bit more time. It almost feels like tomorrow some company could unveil the thing, and once just one robot is created that can do this, the entire world begins changing exponentially fast.
You don't even need superintelligence for this to go down, but mixing in software that can build smarter versions of itself that could easily be distributed to every other machine on the planet, the amount of change this would bring is just unimaginable. Even picturing literally the best of scenarios, where the superintelligence is completely incorruptible and completely understands how to make everybody happier, the world you get is still frighteningly different. If every disease has a cure, it will make all of them. If there are possible food combinations that are 10x tastier, it'll make them too. There would be nonstop breakthroughs in every possible field, only getting faster as machines gaining exponentially more resources build exponentially more machines that are training themselves to be exponentially smarter at everything. Eventually you'd be making 100 years of technological progress in a day, then not long after you'd make 1000 years of progress in a day, and so on without limit. Living in this sort of world would be like living in a different dimension. And now we've reached a point where far fewer people are talking about whether it's going to happen and now it's all about when. So even in the very fortunate (and probably quite unlikely) scenario where we don't kill ourselves with this technology, the world as we know it will never look anything like it does now again.
Platform: youtube
Video: AI Moral Status
Posted: 2025-12-01T08:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_UgzUSlH0Ho5ACMm0POF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_Ugy0TSDJxACZE_EEZa14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgxYs38_pASehUvltP54AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgyJQhKcesulX_ZIxmt4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugypv5yaXSMXgHpPcAZ4AaABAg","responsibility":"company","reasoning":"deontological","policy":"none","emotion":"mixed"},
{"id":"ytc_UgznmbC81-aS224xnxp4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_Ugzmg5CTz1nVicAnzjN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzX4BTXcTWeqi-dLnl4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_Ugxmo9E8cws-Ev9-ri54AaABAg","responsibility":"company","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgwXciuKZjcZxogZZLJ4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"mixed"}
]
```
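A raw batch response like the one above can be parsed and sanity-checked before the codings are stored. The sketch below is a minimal illustration, not the tool's actual pipeline: the four dimension names come from the coding table on this page, but the sets of allowed values are an assumption inferred only from the values visible here and may be incomplete.

```python
import json

# Allowed values as observed on this page; the full codebooks are an
# assumption and likely contain additional categories.
ALLOWED = {
    "responsibility": {"none", "ai_itself", "government", "company"},
    "reasoning": {"unclear", "consequentialist", "deontological"},
    "policy": {"none", "regulate", "ban"},
    "emotion": {"approval", "fear", "resignation", "mixed",
                "indifference", "outrage"},
}

def parse_batch(raw: str) -> dict:
    """Parse a raw LLM batch response and index codings by comment ID.

    Raises ValueError if a record lacks an id, misses a dimension,
    or uses a value outside the (assumed) allowed sets.
    """
    coded = {}
    for rec in json.loads(raw):
        cid = rec.get("id")
        if not cid:
            raise ValueError(f"record missing id: {rec!r}")
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{cid}: bad {dim}={rec.get(dim)!r}")
        coded[cid] = {dim: rec[dim] for dim in ALLOWED}
    return coded

# Hypothetical single-record batch for illustration.
raw = ('[{"id":"ytc_example","responsibility":"none",'
       '"reasoning":"consequentialist","policy":"none","emotion":"fear"}]')
print(parse_batch(raw)["ytc_example"]["emotion"])  # fear
```

Indexing by comment ID mirrors how this page looks records up; a malformed or out-of-vocabulary coding fails loudly rather than being silently stored.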