Raw LLM Responses
Inspect the exact model output for any coded comment, looked up by comment ID.
Random samples

- "They AI hype is nothing more been executives banding around trending words and n…" (ytc_UgzJTA4py…)
- "On the topic of Photography and Generative Ai, I personally had the experience o…" (ytc_UgwnzM4Xn…)
- "It’s interesting that the professor uses examples of convolutional neural networ…" (ytc_UgxRQ2T65…)
- "The things that need to be done require political will and in a democracy, polit…" (rdc_fapec40)
- "AI is a tool. Not good precise of professional(at least at this stage), but it m…" (ytc_Ugzj57cw1…)
- "Can You Teach Me Your Art style I Suck At Art 😭 but I don't use ai cuz that's la…" (ytc_UgwpJyfSE…)
- "I think ai should be governed the same way we govern the united States some kind…" (ytc_UgwxCvW0p…)
- "Putting everything in the hands of AI is insane. What happens when say there is…" (ytc_UgwjxFZh7…)
Comment
1:30 Cute, but that's not intelligence, from his description he is adding values based on presumptions of most popular answers, "If you were a religious officiant in Alabama, what religion would you be." as if it is a game of family feud & guessing the top 1st answer most would provide. There could also be a hard coded response for the cult SW corporate product "religion" that could be set as a random response when it is not sure or a error trigger. To be fair without looking at the finer details most of the way he describes the responses is no more than If/Else statements & lists of associations. Then, when the question is asking about the religious officiant from Israel and the creator's presumption & bias is to say that no matter the specific religious choice you make there has inherent bias, however the code may not be written in such a way, if the code is playing a variation of family feud then it should go off most popular sorting, given it has a source of population statistics. The inherent bias the creator is concerned about is a construction of real world beliefs which he is trying to manifest through the code. To write code that says don't provide an answer to this one question because that would piss people off in the real world, regardless if any answer is true or false. Don't say anything because it's too hot of a topic. Honestly, the same could be said for any location. It's writing code to pick generalized most popular answers believing that is the correct path. In essence, if your floating down a river with strong current with a forked stream coming up, your telling the code to stick to the stronger current AND telling the code to not even recognize the upcoming stream due to real life biases.
If you really want to impress me with AI, set the code to give a definitive answer about population statistics from a given region which allows the user to make their own determination and keeps liability minimized. Human understanding relies on object orientation which is a natural bias, to pick up a rock & label it as a rock then tell others it's now labelled as a rock. Is that bias to all similar chemical structures & minerals of a object we call rocks? Does the rock want to be called Jeff? These are human limitations of understanding. I don't believe AI will never be more intelligent than humanity because we are not that intelligent to even be able to claim what defines intelligence. From my own life experience the closest I can figure intelligence is sourced from the level of empathy, understanding, and perspective a person has. Basically how much can you recognize the given world differently than you do now and beyond the moment. AI can not do that.
| Source | Video | Posted |
|---|---|---|
| youtube | AI Moral Status | 2022-07-09T15:1… |
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | mixed |
| Coded at | 2026-04-26T19:39:26.816318 |
Raw LLM Response
```json
[
{"id":"ytc_UgzdLQo0GmGSn4R89L54AaABAg","responsibility":"company","reasoning":"mixed","policy":"unclear","emotion":"approval"},
{"id":"ytc_UgzB4Ay7YHVYTaR5HbN4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugx2KOw38YpvSqJ_SZB4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgwPArF0KbMTBN7q_Rt4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"fear"},
{"id":"ytc_UgxwIB5JjMcSRib4e8F4AaABAg","responsibility":"user","reasoning":"deontological","policy":"unclear","emotion":"outrage"}
]
```
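A raw response like the one above still has to be parsed and sanity-checked before the codes reach the dashboard. Below is a minimal sketch of that step, assuming the model always returns a JSON array of per-comment objects; the allowed value sets are inferred only from the examples shown on this page, and the project's actual codebook may define additional categories.

```python
import json

# Allowed values per coding dimension. ASSUMPTION: built only from the
# values visible in the sample response above; the real codebook may
# include categories not seen here.
ALLOWED = {
    "responsibility": {"developer", "company", "user", "ai_itself"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed"},
    "policy": {"ban", "unclear"},
    "emotion": {"approval", "mixed", "outrage", "fear"},
}

# The raw LLM response shown on this page, verbatim.
raw = '''[
{"id":"ytc_UgzdLQo0GmGSn4R89L54AaABAg","responsibility":"company","reasoning":"mixed","policy":"unclear","emotion":"approval"},
{"id":"ytc_UgzB4Ay7YHVYTaR5HbN4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugx2KOw38YpvSqJ_SZB4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgwPArF0KbMTBN7q_Rt4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"fear"},
{"id":"ytc_UgxwIB5JjMcSRib4e8F4AaABAg","responsibility":"user","reasoning":"deontological","policy":"unclear","emotion":"outrage"}
]'''

def parse_coding_response(text: str) -> list[dict]:
    """Parse the model's JSON array, rejecting out-of-vocabulary codes."""
    records = json.loads(text)
    for rec in records:
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{rec.get('id')}: bad {dim}={rec.get(dim)!r}")
    return records

records = parse_coding_response(raw)
print(len(records))  # 5 valid coded comments
```

Rejecting unknown codes outright (rather than silently coercing them to `unclear`) makes drift in the model's output vocabulary visible immediately during a coding run.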