Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
But what if someone who writes a book which is 100% their own creation, however…
ytc_Ugw7qX6mD…
The only reason why submarines have to come back up is because humans need food
…
ytc_UgxOmYcN5…
i have tried and tried many generators and key terms, but never has the image pr…
ytc_Ugw0-yEJT…
there is no way bro got ChatGPT to introduce the sponsor
Edit: i just finished …
ytc_Ugx17zaQR…
I do machine learning training for work and I would never in a million years cho…
ytc_UgybAM6cM…
waymo can make a selfe driving taxi.. idk why anyone would mess with Elon at all…
ytc_Ugyr796iO…
it’s unnerving to think Apple’s vast data sets of facial recognition could led t…
ytc_Ugzcdv4Ky…
Imagine that, now people are making "lists" of the "List-makers"? Wonder what th…
ytc_UgxwNX6aJ…
Comment
"It's hard to create an LLM that would want to destroy the world, because all the information going into is, like, 'that's bad, destroying the world is bad'. But you wouldn't want to leave it up to hope." And that's a misguided hope, anyway! Most published speculation about superintelligence (or AI in general) dwells on how it might go wrong. And the training corpus ALSO includes most science fiction ever written about AI. How often do you see benevolent, Iain-Banks-style AI in fiction ... and how often do you see SkyNet?
TL;DR when ChatGPT learns how an AI should behave, it's learning to bring our worst fears to life.
youtube
AI Moral Status
2025-10-30T21:0…
♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | liability |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_UgyABG2BqQo_bQ0RTeF4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzwKIBkTIjwF5QgSOR4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugzp0VQ5QCWvMSJH6-h4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgwXt8u0LAlcm6JcuIJ4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugx2mNarWuP2T8jCTfJ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxCJBabiQ3Iz1EJtSp4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgzwP2sI4oMWXokqcHV4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzfXKjmHwOdcVoYIAd4AaABAg","responsibility":"company","reasoning":"virtue","policy":"industry_self","emotion":"approval"},
  {"id":"ytc_UgxNQQH7JScRsLDbMUp4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgwnUXuXIdWgn0uB8bd4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"fear"}
]
```
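Since the raw response is a JSON array of objects keyed by comment ID, recovering one comment's codes (as the "Coding Result" table above does for the displayed comment) is a single parse-and-index step. A minimal sketch in Python; the `index_by_id` helper is illustrative, not part of the app, and the sample is truncated to two entries from the batch above:

```python
import json

# Two entries from the raw LLM response shown above.
raw_response = """
[
  {"id": "ytc_UgzwKIBkTIjwF5QgSOR4AaABAg", "responsibility": "distributed",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgwnUXuXIdWgn0uB8bd4AaABAg", "responsibility": "developer",
   "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"}
]
"""

def index_by_id(response_text: str) -> dict:
    """Parse a batch coding response and index each row by its comment ID."""
    return {row["id"]: row for row in json.loads(response_text)}

codes = index_by_id(raw_response)
# Look up the codes for the comment displayed above.
print(codes["ytc_UgwnUXuXIdWgn0uB8bd4AaABAg"]["policy"])  # liability
```

A dict keyed by ID makes repeated lookups O(1), which matters when the same batch response backs many per-comment detail views.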