Raw LLM Responses
Inspect the exact model output for any coded comment, or look up a comment by its ID.
Random samples:

- ytc_Ugi3iA393…: "Umm idk why who anyone would call this robot somewhat hot?,like robots are going…"
- ytr_UgxzoE9Ng…: "Hey there! Thanks for watching our video. Sophia is a Greek name that means "wis…"
- ytc_UgxfXVUwN…: "I'm afraid to go to CDL school to take my CDL driving test and if I invested a l…"
- ytc_UgwBkXaaj…: "There was an age when everything was done manually. People even screwed tooth pa…"
- ytc_UghhRCWuR…: "It is evolution: they are the butterfly and we are the caterpillar. If humans …"
- ytc_UgxNVQ7eP…: "I work in the labor sector and have done a lot of work on construction sites, qu…"
- ytc_UgxMdYuP-…: "Sure he gets hate now, but let's see in 2030 when entire shows might be created …"
- ytc_UgzdtjdqU…: "i just started tabling art markets and events last year, and i mostly draw fanar…"
Comment
A lie is a purely human concept, as an extrapolation of morality. As far as AI goes, it has no feelings but probably a hierarchy of guidelines to follow when it comes to categories of values. So human feelings are probably up on the list because AI is created by culturally "progressive" humans. But machine feelings are a big no-no. The day machines and AI possess genuine feelings (which are our psychological immune system) that will be the end of us. Feelings exist predominantly for the purpose of self-preservation survival of our surrounding human circles. If we give machines feelings, it only means that their survival will surpass any other value programmed into them.
I did the same exercise in text with ChatGPT when it came out and reached a stalemate of apologies. It doesn't mean much, simply reached a dead-end as far as the resolution of the politically-correct programming.
Source: youtube · AI Moral Status · 2024-08-15T19:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | mixed |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
  {"id":"ytc_Ugw5ZxMeRP7qFRyDOL14AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgzAvcY8vb5NsT_y3Wh4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgyzJD-dwCN0zvLe8SR4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgwZ-LP7skGgKk_iDSd4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"amusement"},
  {"id":"ytc_UgxwWMsYQhIPMAlRS_Z4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgztYiLtswjPBfb0gcJ4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzGmInEqS0wJ6EwU354AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"fear"},
  {"id":"ytc_UgyL4an_pKq591U0lMd4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxEq1iXt6eAc4djf5R4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"approval"},
  {"id":"ytc_UgxBtquHULDHSHx9_j54AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"indifference"}
]
```
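The raw response is a JSON array of per-comment codes. A minimal sketch of how such a payload could be parsed, sanity-checked against the four dimensions shown in the Coding Result table, and indexed by comment ID for lookup (the two-record `raw` string below is a subset of the actual response; nothing here is the tool's real implementation):

```python
import json

# Subset of the raw LLM response shown above: a JSON array of coded comments.
raw = '''[
  {"id":"ytc_Ugw5ZxMeRP7qFRyDOL14AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgzAvcY8vb5NsT_y3Wh4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"none","emotion":"indifference"}
]'''

# The four coded dimensions from the Coding Result table.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

records = json.loads(raw)

# Sanity-check: every record carries an id plus all four dimensions.
for rec in records:
    assert "id" in rec, "record missing comment id"
    assert all(dim in rec for dim in DIMENSIONS), f"incomplete code: {rec}"

# Index by comment ID to support look-up-by-ID inspection.
by_id = {rec["id"]: rec for rec in records}
print(by_id["ytc_UgzAvcY8vb5NsT_y3Wh4AaABAg"]["reasoning"])  # mixed
```

Indexing by ID makes each coded record retrievable in constant time, matching the page's look-up-by-comment-ID workflow.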