Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples:

- "AI art is legitimately fucking disgusting. Especially when greedy corporates try…" (ytc_UgxAk0YdK…)
- "Why is it ethical to make AI smarter and unethical to make human smarter? Organi…" (ytc_Ugyhb6gs8…)
- "Why assume ai is evil? Its literally the child of the human mind lol.. Raise it…" (ytr_UgyU33KgK…)
- "prosecutors will not admit they are capable of making a mistake and the Fargo PD…" (rdc_oa4s1un)
- "The reason I don't necessarily trust these predictions to be 100% accurate is pr…" (ytc_Ugz4AmODN…)
- "It might be all hyperbole and not intelligent at all. It could possibly be just …" (ytr_Ugw7-f-0n…)
- "Used to listen to audio books at a old job 25 yrs back, how to survive a robot t…" (ytc_UgwD8IXmL…)
- "Funny how these ai developers decide they want to warn us, but only after the fa…" (ytc_UgwDZfWKf…)
Comment

> Yeah sounds like AI recognized how society is
> Men having basically more strenght and logical reason biologically than women that are more emotionally sided, black people (In the US) are more frequent to be in dangerous activities due of where most of them live (actual study with data) and the one that says "black people should be a lot sicker than white people to be treated equally" sounds either cause of society or how maybe they're stronger than white folks
> And people are concerned that AI will ruin society? If you want to make something to fix society don't add things that basically ruin society bruh.
> People can be so smart yet so stupid sometimes.

| Platform | Topic | Posted |
|---|---|---|
| youtube | AI Bias | 2023-11-16T16:0… |
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | unclear |
| Policy | none |
| Emotion | mixed |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
{"id":"ytc_UgwfY3K8fR7CfprxZRN4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgzZTrBo2eqTxYvIgHd4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
{"id":"ytc_UgyVSa5Ka7BXta67d3F4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"outrage"},
{"id":"ytc_UgyeJw0ABRzquv9DjO94AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyPFIrDrMIXG4QkyYB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_UgwwEvELea3obFsWjll4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_UgzEi7lG35XDTq0e0ZV4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugxd6TaxMR0j4HF4rA94AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"mixed"},
{"id":"ytc_Ugww6PNJld8Mu1v3tKh4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"liability","emotion":"mixed"},
{"id":"ytc_UgxoldWzYrcKCiMib8x4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"industry_self","emotion":"industry_self"}
]
```
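The raw response is a JSON array with one object per coded comment, keyed by `id` and carrying the four coding dimensions (`responsibility`, `reasoning`, `policy`, `emotion`) shown in the table above. A minimal sketch of how such a response could be parsed and queried by comment ID — the field names mirror the response above, but the `lookup_coding` helper and the truncated sample payload are ours, not part of the tool:

```python
import json

# Two entries copied from the raw response above; a real payload would
# contain the full batch of coded comments.
RAW_RESPONSE = """
[
  {"id": "ytc_UgwfY3K8fR7CfprxZRN4AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgzZTrBo2eqTxYvIgHd4AaABAg", "responsibility": "developer",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "approval"}
]
"""

def lookup_coding(raw, comment_id):
    """Return the coding dict for comment_id, or None if it is absent."""
    for item in json.loads(raw):
        if item.get("id") == comment_id:
            return item
    return None

coding = lookup_coding(RAW_RESPONSE, "ytc_UgzZTrBo2eqTxYvIgHd4AaABAg")
print(coding["policy"])  # regulate
```

Because the model's output is parsed as JSON, a malformed response would raise `json.JSONDecodeError` here rather than silently producing a wrong coding — a reasonable failure mode for an inspection tool like this one.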