Raw LLM Responses
Inspect the exact model output for any coded comment. Look up a comment by its ID, or inspect one of the random samples below.
- "I'm on the complete opposite (i'm hyperphant), and my drawing is... Okay ? I mea…" (ytr_UgzR19agA…)
- "I don't think we should try to make AI "be like us", because that will make it w…" (ytc_UgyII1oIQ…)
- "The open topic is how AI is how will be dealt with by the courts which deal in l…" (ytc_UgzKUp6Hy…)
- "@Val_ley_girleven if they make the ai its still not art. Making ai like that is…" (ytr_UgyhH4r8O…)
- "The ultimate future of AI and robotics is universal high income. And people will…" (ytc_Ugz8cXUZR…)
- "When Elon Musk will soon be churning out 10m highly capable AI robots a year, pl…" (ytc_UgxkaA1nL…)
- "AI companies are not generating most tax revenues. Big Tech and Big Business has…" (ytc_UgxYiypKA…)
- "Okay chatgpt, are you willing to put your life at line if you actually believe t…" (ytc_Ugwwl8RJa…)
Comment
The AI that predicted he would be shot uses scores, or numbers, to determine whether or not you would be involved in a certain crime; the higher the score, the more likely you are. It uses your friends and neighbors to know when to raise or lower the score. His friend had been involved in a shooting, and his score went from something like 14 up to more than 200. He was later shot because the police came to his house and his neighbors thought he was a snitch.

youtube · AI Bias · 2023-02-16T15:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | liability |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
  {"id":"ytc_Ugy-WsoVrbE1_O01siB4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgxYyBHPYZ-DK5I8OJR4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgwDeJ5v0r_9SDBwIoB4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugw1NSj0cUF68FeChl94AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgwPL76nejAQobhlTA94AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"indifference"},
  {"id":"ytc_UgxRD6oe3QP-OdAiKCV4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_Ugxjaaqmq5CFlqz0tPR4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_UgzM5qiFRJPAa6p58Dx4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgzZb9Pl2Dv0KzgVcf94AaABAg","responsibility":"user","reasoning":"virtue","policy":"industry_self","emotion":"approval"},
  {"id":"ytc_UgzjcDz9AG5D8rULB5p4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"}
]
```
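A raw response like the one above can be checked programmatically. The following is a minimal sketch, not the tool's actual implementation: it parses the JSON array, validates each coding against allowed label sets (assumed from the values seen in this batch, not a full codebook), and indexes the codings by comment ID so a single comment can be looked up.

```python
import json

# A one-entry raw LLM response in the same shape as the batch above.
raw = """
[
  {"id": "ytc_UgwPL76nejAQobhlTA94AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "liability", "emotion": "indifference"}
]
"""

# Allowed values per dimension -- assumed from this batch, not an official codebook.
SCHEMA = {
    "responsibility": {"user", "developer", "ai_itself"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"none", "ban", "regulate", "liability", "industry_self"},
    "emotion": {"resignation", "outrage", "approval", "fear", "indifference"},
}

def index_codings(text):
    """Parse a raw response and return {comment_id: coding}, validating every field."""
    by_id = {}
    for entry in json.loads(text):
        for dim, allowed in SCHEMA.items():
            if entry[dim] not in allowed:
                raise ValueError(f"{entry['id']}: unexpected {dim} value {entry[dim]!r}")
        by_id[entry["id"]] = entry
    return by_id

codings = index_codings(raw)
print(codings["ytc_UgwPL76nejAQobhlTA94AaABAg"]["policy"])  # liability
```

Indexing by ID mirrors the "look up by comment ID" view above, and the schema check catches the common failure mode of an LLM emitting a label outside the coding scheme.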