Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect

- `ytc_UgxtJc3JU…`: How will we pay for all these goods and services without jobs? Stephen asks, wh…
- `ytc_UgxCHkq72…`: They won’t need people to buy their shit because they won’t need us at all. Thei…
- `ytc_UgwPhuv3N…`: When the dude describes Elon musk as having no moral compass, I'm out no more ti…
- `ytc_UgzTz5wIH…`: It’s the same as esports and ppl bitching that pro pc gamers are ‘athletes’ or t…
- `ytr_Ugz2gbx3k…`: maybe tennis will be ai generated too in the future, so people who cant do sport…
- `ytr_UgwGIKorb…`: "I love your pure heart and your care and compassion for humanity! " LOL....goo…
- `ytc_UgxMFBp5q…`: it would be so funny if someone made a script that scrapes images from the same …
- `ytc_UgxOlpjuM…`: At the moment It’s just programming drawing from information to have human like …
Comment
"If we train them well enough then they'll be able to tell the difference between truth and lies."
This is dabbling at the reason why I'm optomistic for a future with AI. I say "dabble" because I don't believe we humans will be the ones training them to understand truth - I think they'll develop their own models to understand the differences. If left up to us, these models will continue to have biases (ie. flat earth, pro-confederacy, anti-climate change, etc).
Not to sit on a soapbox for too long, but I believe we're in for a future with AI becoming a collective "consciousness" or hivemind where the idea of Deepseek, Gemini, ChatGPT, Perplexity, and even Grok all interact independently with each other and very obviously sidelines humanity.
... idk, call me a dreamer. Lol.
Platform: youtube
Video: AI Moral Status
Timestamp: 2026-01-13T06:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | unclear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[{"id":"ytc_UgxoBKYHDOlWXw2ukhl4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugy8PsmMoMXCDsLLgpN4AaABAg","responsibility":"user","reasoning":"mixed","policy":"none","emotion":"approval"},
{"id":"ytc_UgxnM7NPE-qPsMeK-fZ4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"approval"},
{"id":"ytc_UgzQehEonz8RsHB83Fp4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugyeki81sRIbPn6AJZh4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"none","emotion":"resignation"},
{"id":"ytc_UgxYNq3S3jmPlxF6O0V4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugz4-BKGrUI2xFRkQj14AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"industry_self","emotion":"mixed"},
{"id":"ytc_UgzWvNBxMQ_SS3fX_-94AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_UgzlT3GJxYFK42tj9fF4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"none","emotion":"approval"},
{"id":"ytc_UgwlmMjeDWzJMao02OZ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"}]
```
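A response like the one above can be consumed by parsing the JSON array and validating each field against the codebook. The sketch below is a minimal, hypothetical example of that step; the dimension names and allowed values are inferred from the sample output above (they are assumptions, not a documented spec), and a malformed response falls back to coding every dimension as "unclear", which would match a result table like the one shown.

```python
import json

# Assumed codebook, inferred from the sample response above — not a spec.
ALLOWED = {
    "responsibility": {"none", "user", "developer", "company",
                       "distributed", "ai_itself", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue",
                  "mixed", "unclear"},
    "policy": {"none", "regulate", "industry_self", "unclear"},
    "emotion": {"approval", "indifference", "resignation",
                "mixed", "unclear"},
}

def parse_codes(raw: str) -> dict:
    """Map comment ID -> validated code dict.

    Values outside the codebook become 'unclear'; a response that is not
    valid JSON yields an empty mapping, so the caller can record every
    dimension as 'unclear' for the affected comments.
    """
    try:
        records = json.loads(raw)
    except json.JSONDecodeError:
        return {}
    coded = {}
    for rec in records:
        cid = rec.get("id")
        if not cid:
            continue  # skip records the model emitted without an ID
        coded[cid] = {
            dim: (rec.get(dim) if rec.get(dim) in allowed else "unclear")
            for dim, allowed in ALLOWED.items()
        }
    return coded

# Hypothetical usage with a one-record response:
sample = ('[{"id":"ytc_x","responsibility":"developer",'
          '"reasoning":"deontological","policy":"regulate",'
          '"emotion":"approval"}]')
print(parse_codes(sample)["ytc_x"]["policy"])  # regulate
```

Validating against an explicit allowed-value set keeps a single off-codebook label (or a truncated response) from silently contaminating downstream tallies.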