Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
Love the content as always, great work. What I find interesting that when it com…
ytc_UgweztkmW…
As a motorcycle rider and Tesla owner. I don't understand how people can be so n…
ytc_Ugx6jLbUl…
Man, I don't consider AI creations as art, but I do enjoy being able to make stu…
ytc_Ugw_Mteyh…
Facial recognition only picks up if you have a mug shot. These Democrats are awa…
ytc_UgwGgMnNa…
AI is banned in Mass Effect, a computer game made by us... knows better than us.…
ytc_UgyRY1jCV…
@fryrsquared, would you rather be polite to your parents when they seem annoying…
ytc_UgxooOKkI…
A vital discussion but my predictions for AI are:
1. Government will use it for …
ytc_Ugx3Qja1D…
Someone please explains why I relate this to the future in “The matrix”.
Feeling…
ytc_UgzQON9mp…
Comment
One major problem with AI's in our current society is the fact that they learn everything from preexisting information. Humans are not perfect. We are prejudiced and sometimes have rather uneducated world views. It has been shown that some AI's take over these traits. This can lead to racist or harmful views these AI's aquire.
This can be a major problem that we need to be aware of in the future
youtube
AI Responsibility
2023-01-10T00:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_Ugyow5nwQ_tw-h3_Twt4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugw-qKx9AEz9SO1nEVp4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_Ugz4PtxJdqXPHgQM3ip4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_Ugx3ky_GRXdgpgZWGjV4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgxDWAurlcyK7UeYyzp4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgxmvOyf7KVDJvJ-fKp4AaABAg","responsibility":"user","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgxuMaFCj4Naj5zTZPB4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgwDx_PYQsie7mSmzyR4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugz5d4-8TSdBPBhq94l4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytc_UgzNheKzByeDE7FmUv14AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"fear"}
]
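Since the model returns free-form JSON, a coding pipeline like this one typically validates each record before accepting it. A sketch of such a check, where the allowed value sets are inferred from the responses above (the real codebook may include values not seen here):

```python
import json

# Hypothetical per-dimension value sets, inferred from the raw
# responses on this page; assumption, not the official codebook.
ALLOWED = {
    "responsibility": {"developer", "company", "user", "ai_itself", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "mixed", "unclear"},
    "policy": {"regulate", "ban", "liability", "none", "unclear"},
    "emotion": {"fear", "outrage", "approval", "indifference", "mixed"},
}

def validate_batch(raw: str) -> list[str]:
    """Return human-readable problems; an empty list means the batch is clean."""
    problems = []
    for i, record in enumerate(json.loads(raw)):
        # Comment IDs in this dataset all carry the ytc_ prefix.
        if not str(record.get("id", "")).startswith("ytc_"):
            problems.append(f"record {i}: missing or malformed id")
        for dim, allowed in ALLOWED.items():
            if record.get(dim) not in allowed:
                problems.append(f"record {i}: bad {dim!r} value {record.get(dim)!r}")
    return problems
```

Records that fail validation would then be flagged for re-prompting or manual review rather than silently coerced into the table.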