Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I disagree in parts. First, on speed: as usual, getting the first 80% is far easier than the remaining 20%. Second, on treating AGI as fatal: AGI is disconnected from the rest of the video, which is really about a possible path pushed by the richest. But even they know it's a dangerous path that risks pushing people toward violence. There's also the assumption that intelligence can increase without limit, which is very questionable. My guess is that intelligence, at some point, follows more of an S-curve, though it depends on how you measure it. In any case, intelligence is not an excuse to replace anyone; that's just a utilitarian view of the world. I can assure you most people love their dogs more than they love a lot of people, and dogs are far less smart than humans, because value does not lie in intelligence. Intelligence is useful for doing certain things, but it's not a problem once you make a utility-driven world obsolete and start thinking in terms of ethical value. AGI will become a reality, although maybe not so soon (it's questionable whether LLMs will get there; they seem to have certain limitations), and AGIs will live among us, and that's it. No drama, no apocalyptic ending, no humans relegated to doing nothing. It's also not clear what position the rich will take, but if they push to make people poor just to increase their numbers, they will create a massive opposition force that no army can stop. If the Western world pushes for that, capitalism will die. One way or another, a new system will emerge. We will still learn and go to school, not to be "productive" in a world where production is guaranteed, but for our own improvement, much as people today try not to overeat, or go to the gym: not because they are forced to, but because they feel better after doing it.
Sure, the story you describe is a possible path, but it will generate strong opposition, one that will finally change things. And stopping the technology was never the answer, because it was never the problem; the problem is HOW we use it. I think the problem is that you insist on sustaining how the world currently works, so you hope to limit AI. But that's not how it's going to happen. AGI will come, and the world will change. Maybe we go through hell first before rebuilding the system, or maybe we rebuild the system before it becomes too broken. We will see.
Source: youtube · Viral AI Reaction · 2025-11-23T18:1…
Coding Result
Dimension        Value
Responsibility   unclear
Reasoning        mixed
Policy           unclear
Emotion          unclear
Coded at         2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_UgyQtIPDqf6EzUwWQgd4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgyhaehjGCL9-KkvKT94AaABAg", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgzCFbxMhMA72zdeHMN4AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "liability", "emotion": "unclear"},
  {"id": "ytc_UgzobiR8NAwVg4Lly9B4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgzxZOdhjjCbKwxqTKJ4AaABAg", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgyukrAKxHWM6qqBTBx4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgwywsAts-qku9t4ycZ4AaABAg", "responsibility": "unclear", "reasoning": "mixed", "policy": "unclear", "emotion": "unclear"},
  {"id": "ytc_UgzFtdb7s7nbLQq2Sxd4AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgwwlmFKJIK9NfnSqW94AaABAg", "responsibility": "government", "reasoning": "contractualist", "policy": "regulate", "emotion": "approval"},
  {"id": "ytc_Ugwlp3rXI7EOXDG4E_B4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "unclear", "emotion": "indifference"}
]
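To inspect the model output for a specific comment, the raw response can be parsed as a JSON array and indexed by comment id. This is a minimal sketch, not part of the original pipeline; the variable names are hypothetical, and it assumes the response is valid JSON with the four dimensions shown above (only two of the ten rows are included for brevity).

```python
import json

# Raw LLM response: a JSON array of per-comment codes (excerpt of the
# full response shown above; each row carries four coded dimensions).
raw_response = '''[
  {"id": "ytc_UgyQtIPDqf6EzUwWQgd4AaABAg", "responsibility": "company",
   "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgzxZOdhjjCbKwxqTKJ4AaABAg", "responsibility": "unclear",
   "reasoning": "unclear", "policy": "unclear", "emotion": "mixed"}
]'''

# Index the rows by comment id for O(1) lookup.
codes = {row["id"]: row for row in json.loads(raw_response)}

# Look up the coded dimensions for one comment id.
code = codes["ytc_UgzxZOdhjjCbKwxqTKJ4AaABAg"]
print(code["responsibility"], code["emotion"])  # unclear mixed
```

A lookup like this makes it easy to cross-check a single comment's codes (e.g. the all-unclear row above) against the rendered coding-result table.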