Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "Predictive Policing? What kind of ridiculous euphemism is that for violating our…" (ytc_UgykuQRGc…)
- "The sad thing is, this AI is being trained (partly) on this very conversation. …" (ytc_UgzjLCfWQ…)
- "Their is another way to look at this: Autonomous systems have been in use for a …" (ytc_UgyTqQos-…)
- "If AI will populate so many jobs, which is unkikely, they will have to do an ai …" (ytc_Ugw8PhXzv…)
- "@user-ry2xx6yw5i: Great video! Robot jumping into the crowd, that's a sight to s…" (ytr_UgwpnyKiI…)
- "We need ai to become intelligent enough to be able to quantity feelings and emot…" (ytc_Ugy0FaL38…)
- "This is partially incorrect. You can feed ai data like a spread sheet or a trans…" (ytc_Ugx8WQzvx…)
- "It's sad to see ai bros doing this. I started doing art because the one free hob…" (ytc_UgxaiAKwX…)
Comment
This makes me so angry.
I read Cathy O'Neil's book Weapons of Math Destruction, and it sounds like they're not allowed to ask for your race, but they ask a bunch of questions so they can figure out your race, then send you to prison for a long time if you're black.
If there is an algorithm that determines how somebody should be sentenced, that is a policy, and should be made public. After all, the public are the people who should (via representative democracy), ultimately determine what constitutes a crime, and how it should be punished.
It's also super creepy that they ask about whether your parents are separated - so you might get a lighter sentence if your parents are unhappily together than if they've moved on?
youtube
2022-12-20T06:4…
♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | government |
| Reasoning | deontological |
| Policy | liability |
| Emotion | outrage |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_UgzAfbK0FdswR70lsQ54AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_Ugxd0oSkFW7fRVJbMwl4AaABAg","responsibility":"company","reasoning":"mixed","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgwoFANTfnVo7oaZrPp4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugy3h0zMrkSDAGIZ3CZ4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgzXcIvw7zi4DzAlMHN4AaABAg","responsibility":"government","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgxKx0vT62xj0m_CMj94AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"industry_self","emotion":"mixed"},
{"id":"ytc_UgxHePCkwEyDdK06uH94AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_Ugy0N1xHUbesuMdatxl4AaABAg","responsibility":"government","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgxT8h7puPLwtdGX45l4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"mixed"},
{"id":"ytc_UgwaWRKBFeKx20Geirp4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"fear"}
]
```
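A raw response like the one above can be joined back to individual comments by ID. A minimal Python sketch, assuming only the array-of-objects shape shown here (the `index_by_id` helper and its validation are illustrative, not part of the tool):

```python
import json

# Raw model output: a JSON array of per-comment code assignments,
# in the same shape as the response above (two example records).
raw_response = '''
[
 {"id":"ytc_UgzXcIvw7zi4DzAlMHN4AaABAg","responsibility":"government","reasoning":"deontological","policy":"liability","emotion":"outrage"},
 {"id":"ytc_UgwaWRKBFeKx20Geirp4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"fear"}
]
'''

# The four coding dimensions every record must carry, plus the comment ID.
REQUIRED_KEYS = {"id", "responsibility", "reasoning", "policy", "emotion"}

def index_by_id(raw: str) -> dict:
    """Parse a raw coding response and index the code assignments by comment ID."""
    records = json.loads(raw)
    indexed = {}
    for rec in records:
        missing = REQUIRED_KEYS - rec.keys()
        if missing:
            # Malformed model output: surface it instead of silently coding partial data.
            raise ValueError(f"record {rec.get('id')!r} missing keys: {sorted(missing)}")
        indexed[rec["id"]] = {k: v for k, v in rec.items() if k != "id"}
    return indexed

codes = index_by_id(raw_response)
print(codes["ytc_UgzXcIvw7zi4DzAlMHN4AaABAg"]["emotion"])  # outrage
```

With an index like this, the "Look up by comment ID" view is a plain dictionary access, and a missing or incomplete record fails loudly rather than being coded as blank.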