Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- Regarding the "In the style of..." (let's say van gogh) argument, one thing to c… (ytc_Ugxdh238l…)
- I mean we already have the robot Police dogs, in a place or 2 in states… (ytc_UgzouQNKr…)
- Computation does not equal cogitation. No matter how quickly AI does math and ho… (ytc_Ugyi82_I6…)
- They'll take designer's jobs, not artists. I think companies would still hire des… (ytc_UgxvrfJBS…)
- @Avian_slime no. If you need a reference then you look up the suit and colors yo… (ytr_UgydrSvwo…)
- 100% - Art is, and never will be a a math problem that needs to be solved. Meanw… (ytr_Ugy95SdGJ…)
- I do not keep up with the UFO media hype, but I do believe that AI entities from… (ytc_Ugxm9l5Th…)
- The disproportionate coverage of the dangers of AI is like the review fallacy: t… (ytc_Ugzzohi1W…)
Comment
🚨Narrow AI vs. Superintelligence: A Choice We Must Make🚨
I’ve been following Dr. Roman Yampolskiy’s work for some time, but his recent interview on The Diary Of A CEO has deeply concerned me. He estimates a staggering 99.999999% chance that AI could lead to human extinction.
While narrow AI applications—such as cancer research tools, biochemical analysis platforms, and security systems like Scylla AI—are invaluable, they represent a fraction of the AI landscape. The pursuit of superintelligent AI, capable of surpassing human intelligence, is a perilous endeavor. Yampolskiy likens the belief that we can control such entities to an ant imagining it can influence an NFL game.
Investing trillions into developing AI that could outthink us is not progress—it’s a potential path to our own undoing.
We must shift our focus to AI that serves humanity’s best interests, ensuring safety, transparency, and ethical considerations are at the forefront.
The question isn’t just about what AI can do—it’s about what we should allow it to do.
#AISafety#AIethics#Superintelligence#HumanityFirst#TechResponsibility#AIrisks
Platform: youtube
Topic: AI Governance
Posted: 2025-09-06T14:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[{"id":"ytc_Ugy1Eqw1cwClaw6cE8h4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgytW87FHlEYLdxsC3Z4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgwV2r4JUZYUQTRTRAp4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzQobB1I87Gb69Ihmx4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugw2QgNHHeeAovReaq94AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgxeuTXCJLsqgXZm7DR4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
{"id":"ytc_Ugz-F81BbPawt1dbZI14AaABAg","responsibility":"company","reasoning":"mixed","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgytPgoc5YEvSTciyH94AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"ban","emotion":"fear"},
{"id":"ytc_Ugzp1hjlD3uxPNT5YlZ4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugx0tyN8Rmo1MXI-Di54AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"}]
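A raw batch response like the one above can be turned into a lookup table keyed by comment ID with a short parser. The sketch below is a minimal Python example, assuming the allowed category values are exactly those observed in this output (`parse_codings` and the `ALLOWED` sets are illustrative names; the real codebook may define additional categories):

```python
import json

# Allowed values per dimension, as observed in the raw response above.
# Assumption: the full codebook may contain more categories than these.
ALLOWED = {
    "responsibility": {"company", "user", "distributed", "ai_itself", "none"},
    "reasoning": {"consequentialist", "deontological", "mixed"},
    "policy": {"ban", "regulate", "liability", "none"},
    "emotion": {"outrage", "fear", "approval", "indifference", "mixed"},
}

def parse_codings(raw: str) -> dict:
    """Parse a raw batch response into {comment_id: coding}, dropping rows
    whose values fall outside the expected codebook."""
    coded = {}
    for row in json.loads(raw):
        if not all(row.get(dim) in vals for dim, vals in ALLOWED.items()):
            continue  # model emitted an out-of-codebook value; skip the row
        coded[row["id"]] = {dim: row[dim] for dim in ALLOWED}
    return coded

raw = ('[{"id":"ytc_Ugw2QgNHHeeAovReaq94AaABAg",'
       '"responsibility":"distributed","reasoning":"consequentialist",'
       '"policy":"regulate","emotion":"fear"}]')
print(parse_codings(raw)["ytc_Ugw2QgNHHeeAovReaq94AaABAg"]["policy"])  # prints "regulate"
```

Validating each row against the dimension sets catches the occasional out-of-vocabulary label the model invents, which is cheaper than trusting the raw JSON downstream.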