Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "Absolutely! The idea of collaborating with AI like Sophia is fascinating. It ope…" (ytr_Ugy6H8dp0…)
- "Hey Kurzgesagt, could you do a video about the dangers of superintelligent AI? B…" (ytc_UggqumG_A…)
- "I believe so much that the AI will get its own consciousness at some point…" (ytc_Ugz5wBkdW…)
- "49:00 I have solution to AI. Nuke it all. Then no more AI, no more problem fro…" (ytc_UgybgRo1c…)
- "1. Export manufacturing jobs 2. Don't import businesses 3 Do import a workforc…" (ytc_UgyIQ7mKt…)
- "Claude is the best bro it’s the king I downloaded and it coded very fast😂…" (ytc_Ugy_FqUZW…)
- "Give it time to develop. Afro beats isn’t a popular genre like country , hip hop…" (ytc_UgyVsbd77…)
- "@erikmckoul2478 In fact AI needs no physical weapons to do harm to humans when …" (ytr_UgxQDTeVf…)
Comment
Your post is remarkably well-considered and well-written. You're really just some blue collar guy following this in the news?!
A few comments, for whatever they're worth... First, when trying to predict the future, people almost invariably refer to examples from the past. The problem is that the development of AI really has no precedent. Perhaps the industrial revolution is in the ballpark, but I think AI will be even more profound than that. We're sort of headed toward introducing a new, intelligent species into the world. That will be a situation humanity hasn't faced for 40,000+ years, and neanderthals were still pretty human-like compared to AI. So we're really sailing into uncharted territory, and we can only guess what's ahead.
That said, my guess is that AI will continue to advance very rapidly, and dire forecasts about the downsides will not be compelling enough to significantly slow that advance. There will be plenty of genuine positives to motivate progress, and reservations among developers will mostly fall prey to a single argument: "Yeah, maybe we shouldn't do this, but then X, Y, and Z will do it anyway and kick our butts, so..."
So, instead, we'll keep plowing ahead with AI until some really Bad Things happen and then we'll make decisions as a society about how and whether to proceed. A guess is that these Bad Things will take two forms. First, there will be chronic problems: job losses, loss of purpose, undermining of education, pollution of the internet with machine-generated crap, etc. These things will happen slowly and will particularly impact less powerful people, so they won't be sufficient to stop progress alone.
Second, there will be some really scary, acute stuff: maybe AI-powered weapons wreaking terrifying havoc, a vile cult forming around some particularly persuasive AI (like an AI-powered QAnon), or some sort of AI malware that is hard to kill because it runs in a distributed way on millions of hacked machines.
reddit · AI Moral Status · posted 2023-01-18 (1674077864) · ♥ 166
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | unclear |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
[
{"id":"rdc_j4zcgcx","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"rdc_j4zxevk","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"rdc_j4y3cti","responsibility":"ai_itself","reasoning":"unclear","policy":"liability","emotion":"fear"},
{"id":"rdc_j4x3d6h","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"rdc_j4xkjht","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"mixed"}
]
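The raw response above is a JSON array of per-comment codes. A minimal sketch of how such a batch response might be parsed and indexed to support the comment-ID lookup shown at the top of the page (the field names follow the response above; the sets of allowed values are assumptions listing only the values visible in this page's samples, not the tool's actual schema):

```python
import json

# Allowed values per coding dimension (assumed for illustration; only
# values that appear in the sample responses on this page are listed).
ALLOWED = {
    "responsibility": {"none", "ai_itself"},
    "reasoning": {"consequentialist", "unclear"},
    "policy": {"none", "liability"},
    "emotion": {"indifference", "approval", "fear", "mixed"},
}

def parse_batch(raw: str) -> dict:
    """Parse one raw LLM batch response into a {comment_id: codes} index."""
    records = json.loads(raw)
    index = {}
    for rec in records:
        comment_id = rec["id"]
        codes = {dim: rec[dim] for dim in ALLOWED}
        for dim, value in codes.items():
            if value not in ALLOWED[dim]:
                raise ValueError(f"{comment_id}: unexpected {dim}={value!r}")
        index[comment_id] = codes
    return index

# Two records copied from the raw response above.
raw = '''[
  {"id":"rdc_j4zcgcx","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"rdc_j4y3cti","responsibility":"ai_itself","reasoning":"unclear","policy":"liability","emotion":"fear"}
]'''

index = parse_batch(raw)
print(index["rdc_j4y3cti"]["emotion"])  # fear
```

Validating against an explicit value set like this makes malformed or hallucinated codes fail loudly at ingest time rather than surfacing later as blank cells in the coding-result table.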