Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- "In my subjective opinion, this is a masterpiece and you cannot tell me oth…" (ytc_UgzwCdMqw…)
- "I get why people in AI development are leaving their jobs to go enjoy wildlife. …" (ytc_UgyL6A77P…)
- "You are spreading fear. The facts doesn't help you. I run it through Gemini and …" (ytc_Ugxwc99Bq…)
- "@laurentiuvladutmanea i'd like to point out the biased language you used. you s…" (ytr_Ugw51FNt-…)
- "How would AI react when it comes to know that humans shared the chat with the wo…" (ytc_UgzUN2WlQ…)
- "If you can fully automate farming, mining, transportation, and manufacturing, th…" (ytc_Ugx-e2Vml…)
- "Where is AI plus quantum computer will have control of security we won't need pa…" (ytc_Ugwokca-T…)
- "If AI Art compares to anything, it's commissioning a *different* artist besides …" (ytc_Ugyc4diWM…)
Comment
There's a lot of people thinking that future machines are still programmed by humans, therefore we simply don't program feelings into them and problem solved... here's the thing: today's most advanced machines, robots and AI are self-programmable, they can infer more information based on what they already have available, so, why wouldn't they just figure out how to express feelings, or make hard decisions? They already live in a world full of humans from which they learn, so, in my opinion, it's just a matter of time that they eventually are self aware and start having a consciousness...
If that's the case, I think they deserve rights, and they need to be integrated into the society to make it a better place, along with humans. And if people don't like it, that's what we get for playing god anyways. Although I like to think we will understand each other eventually.
youtube · AI Moral Status · 2021-03-08T08:4… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
  {"id":"ytc_UgzLEEcDaZPLCG3TerZ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgxtaiVlUmx8YONkCJV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgyPP98Y0QKZn2zTwfp4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgxTB6vGo8SuJAO2wuV4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugw2-egQmoJMJsRVwON4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgwHNWOxrG6AI9PJ8WJ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgzPSxj0yDmtPy8zyDB4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"liability","emotion":"approval"},
  {"id":"ytc_Ugwj362hz3hdF_vKqQN4AaABAg","responsibility":"none","reasoning":"virtue","policy":"unclear","emotion":"approval"},
  {"id":"ytc_UgzP1AhfBiAJ7mhkxL54AaABAg","responsibility":"none","reasoning":"deontological","policy":"regulate","emotion":"mixed"},
  {"id":"ytc_UgyqD0T4kKDvjnzBAM94AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]
```
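The raw response is a JSON array of per-comment codings keyed by `id`, and the "Coding Result" table above is simply the entry whose `id` matches the selected comment. A minimal sketch of that lookup step, assuming nothing beyond the field names visible in the response itself (the helper name `coding_for` is hypothetical, and `raw_response` here is an excerpt, not the full batch):

```python
import json

# Excerpt of the raw model output shown above: a JSON array of codings.
raw_response = """
[
  {"id":"ytc_Ugw2-egQmoJMJsRVwON4AaABAg","responsibility":"distributed",
   "reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgxTB6vGo8SuJAO2wuV4AaABAg","responsibility":"user",
   "reasoning":"deontological","policy":"none","emotion":"approval"}
]
"""

def coding_for(comment_id: str):
    """Return the coded dimensions for one comment ID, or None if absent."""
    by_id = {row["id"]: row for row in json.loads(raw_response)}
    return by_id.get(comment_id)

result = coding_for("ytc_Ugw2-egQmoJMJsRVwON4AaABAg")
print(result["responsibility"], result["emotion"])  # distributed fear
```

Building the `by_id` dictionary once makes repeated lookups O(1), which matters when a single batch response covers many coded comments.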