Raw LLM Responses
Inspect the exact model output for any coded comment, either by looking it up directly by comment ID or by browsing the random samples below.
A tenet of business management is that you don't ask the people to learn how to …
ytc_UgzZFKzXS…
@llama6394 it should be, lol. There have been cases in which celebrities sue co…
ytr_UgyPEiVGu…
@hrkozl
and maybe, i have a genuine concern FOR capitalism!
if AI keeps replac…
ytr_Ugw55f2yH…
Using a chatbot and build an emotional relationship with it is pathetic and sad.…
ytc_Ugwx7wxX7…
"ai will eliminate human fault" haha no. no it will not.
Deeplearning ai is jus…
ytc_Ugy9MWv3M…
I am against AI art, but this "stealing" argument can't be used because we as ar…
ytr_UgzgBM3wu…
Programs will become way more advanced with more features, all tasks will become…
ytc_UgxZzir5R…
AI at it's core cannot destroy us. If we are destroyed it'll always be our finge…
ytc_Ugz1NhcN5…
Comment
By simple logic. AI is made by imperfect humans. Their character will rub off in the programming without intentionally being done. Will humans have a God given conscious, the AI does not. Will people can change AI cannot. It will justify its means of survival and dealing with others horribly. It shouldn’t be that way, but self sacrifice is something that humans have done and know AI could possibly understand that or even want to.
youtube
2024-12-16T04:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | virtue |
| Policy | none |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response

```json
[{"id":"ytc_UgymSciS9-4kOGe8DB94AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"fear"},
{"id":"ytc_Ugxvgirs5dDdgts0iKB4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgwH8vxiYbI7QZM52Nh4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgxDF3aqTeiSw2_j33l4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_Ugxe_EuwLuNMIEukIB94AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"approval"},
{"id":"ytc_Ugw6YTviQ9iI91qN4s54AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugw2Fb4PoVBLBju5ufJ4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_Ugwn95Sno3IE0-HJI354AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxG8e8KV2CqJHIHpDR4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_Ugzs6NGPlFBW8eGDs5p4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"resignation"}]
```
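The "look up by comment ID" step can be sketched as a small Python helper: parse the raw batch response (a JSON array of records) and index it by the `id` field. The function name `index_by_id` is illustrative, not part of the tool; the two records are copied from the raw response above.

```python
import json

# A small excerpt of the raw LLM response shown above (first two records).
raw_response = """[
  {"id": "ytc_UgymSciS9-4kOGe8DB94AaABAg", "responsibility": "developer",
   "reasoning": "virtue", "policy": "none", "emotion": "fear"},
  {"id": "ytc_Ugxvgirs5dDdgts0iKB4AaABAg", "responsibility": "none",
   "reasoning": "consequentialist", "policy": "none", "emotion": "fear"}
]"""

def index_by_id(raw: str) -> dict:
    """Parse a raw batch response and key each coded record by its comment ID."""
    return {record["id"]: record for record in json.loads(raw)}

coded = index_by_id(raw_response)
record = coded["ytc_UgymSciS9-4kOGe8DB94AaABAg"]
print(record["reasoning"], record["emotion"])  # -> virtue fear
```

Indexing by ID makes the lookup O(1) per comment, which matters when cross-referencing a coding-result table against a large batch of raw responses.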