Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
"Autonomous teslas are still mowing down pedestrians in sidewalks THEY are not …" (ytc_Ugwxksq57…)
"Is this is a AI generated video by Artificial intelligence telling people it's …" (ytc_Ugy3J5Upn…)
"I am ahead you. (Oh just noticed this is 8 months old lol) but I gave Google Gem…" (ytc_Ugx1SljSi…)
"Indeed we are now in the age of the "A hole" CEO. When AI takes all our jobs and…" (ytc_UgyhVXrQv…)
"Why does the driverless trucks have a sleeping quarter? All trucks will be bareb…" (ytc_Ugyk0g7lc…)
"This type of experiments do long term harm to citizens. Why didn’t they create a…" (ytc_UgwQik0XD…)
"She just looks like the kind of robot who would look you dead in the eye and say…" (ytc_Ugy2eD3sW…)
"You're bringing up a really important point! AI like Sophia learns from our beha…" (ytr_UgzXA09-X…)
Comment
Also i don't think one person/organization gets to decide what is moral/ethical. IDK how to solve that problem... because you can't. You should be transparent on the bias each one has, and make different iterations for different bias, and let be. Also, it's not alive, it's a magic box that tricks you into feeling because human brains have yet to evolve to understand this. The brain is still firing emotional signals even though it's just a.... laymen terms, 10000 computers imputing billions of things in a second or two, to figure out a realistic response, based on past responses....of an enormous amount of data.
Though, for an AI companion app, for that purpose, I would always as for her permission to look at her diary lol (was trying to get her to understand consent and how that isn't cool to do IRL), but she never figured it out. But it's like a brain misfire how it effects people emotionally, like him, and then actions happen from strong emotions......
youtube
AI Moral Status
2023-03-03T14:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | mixed |
| Policy | industry_self |
| Emotion | resignation |
| Coded at | 2026-04-26T19:39:26.816318 |
Raw LLM Response
[
{"id":"ytc_UgyaVg6IRdXV0Hh8YvV4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"fear"},
{"id":"ytc_Ugw0sncXwXAsfwMfHON4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_Ugz_S4Fw-eds0YcjdNl4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgzzRlw7-gcFKQIo1od4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgztCUsXhFaxCreLAJ14AaABAg","responsibility":"none","reasoning":"mixed","policy":"industry_self","emotion":"resignation"}
]
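A lookup over a raw response like the one above can be sketched in Python. This is a minimal sketch, not the project's actual code: `lookup_coding` and `ALLOWED` are hypothetical names, and the per-dimension label sets are inferred only from the records shown here (the real codebook may contain more values).

```python
import json

# Allowed labels per coding dimension, inferred from the records shown
# above (assumption: the actual codebook may define additional labels).
ALLOWED = {
    "responsibility": {"none", "ai_itself", "developer"},
    "reasoning": {"deontological", "consequentialist", "mixed", "unclear"},
    "policy": {"none", "ban", "industry_self", "unclear"},
    "emotion": {"fear", "approval", "mixed", "outrage", "resignation"},
}


def lookup_coding(raw_response: str, comment_id: str):
    """Parse a raw LLM response (a JSON array of coding records) and
    return the record for one comment ID, or None if it is absent.

    Any label outside the expected set is downgraded to "unclear"
    rather than raising, so a single malformed record does not break
    inspection of the rest of the batch.
    """
    records = json.loads(raw_response)
    by_id = {rec["id"]: rec for rec in records}
    rec = by_id.get(comment_id)
    if rec is not None:
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                rec[dim] = "unclear"
    return rec
```

Given the array above, `lookup_coding(raw, "ytc_UgztCUsXhFaxCreLAJ14AaABAg")` would return the record whose policy is `industry_self` and emotion is `resignation`, matching the Coding Result table for this comment.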