Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
I have a burning question that I'm hoping the author of these videos can answer. How much of AI's programming is actually programmed externally versus what it "learns", at this point? Are we giving AI any more external (human influenced) programming or is it pretty much on "auto" now? If you provide me an answer, thank you very much.
| Field | Value |
|---|---|
| Source | youtube |
| Video | AI Moral Status |
| Posted | 2025-12-04T15:2… |
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | indifference |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
  {"id": "ytc_UgyAI5dQM385nQZfNnR4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgzYzIcRBGDmQYu5FdZ4AaABAg", "responsibility": "distributed", "reasoning": "mixed", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgxV2qsrCgXoJEV9DHB4AaABAg", "responsibility": "user", "reasoning": "virtue", "policy": "unclear", "emotion": "resignation"},
  {"id": "ytc_Ugwpin09yr7ewz4yTpl4AaABAg", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgxeOKCXgcDEwXkl8jt4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "unclear", "emotion": "approval"},
  {"id": "ytc_UgzUIdO6_tDEHrOrZbp4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_Ugxx8toXEeN3CrU4_op4AaABAg", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgxFLt8fVPe221LMs5V4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgyxcT4Z9Tn-DiWYoNd4AaABAg", "responsibility": "ai_itself", "reasoning": "virtue", "policy": "unclear", "emotion": "resignation"},
  {"id": "ytc_UgxFhBzOdCwL1z20OJV4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "regulate", "emotion": "outrage"}
]
```