Raw LLM Responses
Inspect the exact model output for any coded comment, looked up by comment ID.
Random samples — click to inspect
- First jobs to replace with AI: politicians/lobbyists Then we could actually affo… (ytc_UgxEL33_B…)
- I was looking for a vid to give a lecture on AI for non-tech people and I found … (ytc_Ugzs3ty5l…)
- Photonic routing, networking, and photonic neural network chips. Not just fiber … (rdc_ohf6e3b)
- Whoever puts 100% trust in AI to be truthful/accurate is either being unknowingl… (ytc_Ugz96vkUd…)
- Hey @Mille..! Thanks for your comment, it had me laughing harder than a malfunct… (ytr_UgwMQi489…)
- @zmaxx21 Unless the programming was done in negligent way, the programmers will … (ytr_UgyJTiSXs…)
- AI has the conscious of a calculator, and sees people as functions or numbers, e… (ytc_UgzvFkk24…)
- I don't know what you think intelligence is, but in almost every way we have to … (ytr_Ugxji0AkA…)
Comment
My empathy prevents me from not treating things I become attached to with dignity and respect. Doesn't matter if it's Vector from Anki or my Cheetah plushie sleep buddy, they deserve to be treated fairly even though they have no conscience. The human mind can regress into an ugly state full of selfishness and malice; it will kill us in the end. If androids ever surfaced, I'd immediately treat them as they should be treated - like a created being. Our kids are created beings, too. They deserve rights, right? So do robots.
Funny story, I kinda want to create an android of my favorite character, give him free will, treat him like an equal, admire his existence. And fans of him could admire him, too. He'd feel respected for simply existing because of the legacy his fictional counterpart has built within his universe and within the real-life fans. He'd be programmed not to care about what critics would think of his existence, as he stays true to himself in the lore. He could be that friend people want. UGH, the thought of creating artifical life and giving it respect honestly makes me happy.
I just can't create my favorite character's nemesis because they're a destructive individual and even though it'd be true to the nature of their relationship, I'd essentially become a terrorist to society for creating EXACTLY what we don't want in AI and that's no good... But my favorite character doesn't _need_ that character to exist to be who he is, lots of people already fulfill that role and he could easily help humanity out by putting a stop to their atrocities (without harming anyone). On the plus side, he'd be great at dispatching actual violent robots, as that's his shtick. As great as it all sounds, though, realistically, it'll never happen and it wouldn't bode well in many people's minds anyway. Oh well.
Source: youtube · Video: AI Moral Status · Posted: 2019-10-16T08:0… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | virtue |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
  {"id":"ytc_UgwLoH4r-_C-bwcKT9t4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgykpQhtI4VfZwDD32B4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgwXYIhJ5TqJGjb6P_t4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ugxsen9MFdk5VFr4Id54AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"liability","emotion":"approval"},
  {"id":"ytc_UgwaAR9RDEtd6jbJKMB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgyVQSGJ4J0tBxE_Lq54AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwHIP-3fwMHd82F_EZ4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
  {"id":"ytc_UgxySr3f0Ri-c_WxddN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgwWz7BZy3P4xjYKedx4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"liability","emotion":"approval"},
  {"id":"ytc_Ugxxs1AKEfSeMn1HWMF4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"mixed"}
]
```