Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- "Tip for AI: You could still save humanity as a one of Earth species by putting u…" (ytc_Ugz7HZh46…)
- "Once ai is able to run society without human work, there will be global income, …" (ytc_UgxUR8Rh5…)
- "If Amber Heard was a robot. Or is she? Did she remind anyone else of Amber Hear…" (ytc_Ugwgqro34…)
- "according to deepseek we wont have ai. as the next 3 years will be civilization …" (ytc_Ugx4nFIe8…)
- "my teacher is an ai bro and convinced like 99% of class that ai is good / im the 1…" (ytc_Ugw-sjOO_…)
- "Why do we ALWAYS fixes something that's already perfect? First it's with genders…" (ytc_Ugyt8p5ur…)
- "On Facebook I activated my facial recognition. So far I've been identified as 2 …" (ytc_Ugw32UXmF…)
- "i have thee i mean THEEEE MOST DIOBOLICAL TEXTS WORSE THAN XBOX 360 PARTY CHATS …" (ytc_UgwJiJ3Pu…)
Comment
You're right that people would have to give it "unregulated agency", in the form of robotic workers to maintain infrastructure, etc. But if you think it's crazy to think people would actually do that ... well, 10 years ago, experts thought it'd be crazy to let a conversational AI interact with every last Joe Schmoe on the planet at the same time. But here we are.
There was a report earlier this year, "AI 2027", that laid out a couple of hypothetical paths it could follow. The authors make the point that they can't predict the exact attacks an ASI would make, any more than I could predict the chess moves a grandmaster would make. So it reads a bit like science fiction. But experts all over the industry were singing its praises. The YouTube channel "Species | Documenting AGI" has made half its reputation just rehashing different aspects of that same report. It's even more melodramatic than the actual paper, but it covers the bullet points if you're interested.
(I personally think the paper's authors chose 2027 as an end-of-civilization date because in _their_ head it was the _earliest-possible_ year ASI could become a threat, and people in this field tend to be too bullish on timelines, IMO. And they wanted to scare policymakers straight while there was still time. But IMHO, even the people who wrote the report probably think 2030 is a more realistic year.)
youtube
AI Moral Status
2025-10-31T00:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | fear |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
[
{"id":"ytr_Ugyu6z4Pp0svDkQdioV4AaABAg.AOvWlkghdIeAOwHPKKoVXh","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytr_UgzuZRURQSeeS-QzHsR4AaABAg.AOvWYTnzRcKAOwAgj_NNJj","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
{"id":"ytr_UgxT7RhFToA3B5KS5el4AaABAg.AOvVT1lAWuUAOvX28fpa8B","responsibility":"user","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytr_Ugxm7-V2cw080X9sQZx4AaABAg.AOvVHzWnHuTAOwJ3QLWO2U","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytr_UgyCfMdD9BZ9eMYKsqd4AaABAg.AOvUflNyaVbAOvlKIM-c07","responsibility":"ai_itself","reasoning":"virtue","policy":"unclear","emotion":"fear"},
{"id":"ytr_UgyCfMdD9BZ9eMYKsqd4AaABAg.AOvUflNyaVbAOwEBayQVOR","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
{"id":"ytr_UgyCfMdD9BZ9eMYKsqd4AaABAg.AOvUflNyaVbAOwFlVGWy1q","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"indifference"},
{"id":"ytr_Ugw-S1nEvQFHU322zGt4AaABAg.AOvSWEDCLLeAOw5yAMjmCo","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
{"id":"ytr_Ugw-S1nEvQFHU322zGt4AaABAg.AOvSWEDCLLeAOw9OWiySM3","responsibility":"none","reasoning":"deontological","policy":"unclear","emotion":"mixed"},
{"id":"ytr_Ugw-S1nEvQFHU322zGt4AaABAg.AOvSWEDCLLeAOwB90dnOe1","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"outrage"}
]
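As a minimal sketch of how a coded response like the one above can be consumed: the model returns a JSON array of records keyed by comment ID, so looking up a single comment's coding is just a parse-and-index step. The snippet reuses two records from the response verbatim; the field names match the dimensions in the "Coding Result" table (responsibility, reasoning, policy, emotion).

```python
import json

# Two records copied from the raw LLM response above.
raw_response = '''
[
  {"id": "ytr_UgxT7RhFToA3B5KS5el4AaABAg.AOvVT1lAWuUAOvX28fpa8B",
   "responsibility": "user", "reasoning": "consequentialist",
   "policy": "unclear", "emotion": "fear"},
  {"id": "ytr_Ugxm7-V2cw080X9sQZx4AaABAg.AOvVHzWnHuTAOwJ3QLWO2U",
   "responsibility": "none", "reasoning": "consequentialist",
   "policy": "none", "emotion": "resignation"}
]
'''

# Parse the array and index it by comment ID for O(1) lookup.
records = json.loads(raw_response)
by_id = {rec["id"]: rec for rec in records}

# Look up one coded comment by its full ID.
coded = by_id["ytr_UgxT7RhFToA3B5KS5el4AaABAg.AOvVT1lAWuUAOvX28fpa8B"]
print(coded["emotion"])  # fear
```

This mirrors the "Look up by comment ID" affordance in the interface: the ID in the coding table is the join key back into the raw model output.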