Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- I have sometimes observed chatgpt playing dumb and cutting corners when given co… (`ytc_Ugy79zXyx…`)
- Second comment : “ *these art can exist only because they’re stolen from the … (`ytc_UgwI8OZ65…`)
- 1:40 and the only people I see using AI as "art" are failed artists or people wh… (`ytc_UgyIpO378…`)
- So you made AI learn how to plot to achieve better result in it's main objective… (`ytc_UgwPdxU2h…`)
- I saw a video game dev log where the developer used AI to preview his work in di… (`ytc_UgzZDY9VY…`)
- I Think it's okay to give a robot conciousness and feelings of pain. But not the… (`ytc_UgieTFR18…`)
- As someone who is pretty deep in the subject (galaxies away from ChatGPT and the… (`rdc_m6xoz86`)
- CANCEL AI. I’ve seen this coming for years. Getting answers fast IS NOT WORTH sa… (`ytc_UgzjJbnz7…`)
Comment
The problem is the implicit bias baked into the assumptions used in these types of systems. I’ve heard about one such algorithm designed to help distribute law enforcement resources “more equitably”. It ended up sending more cops to neighborhoods of color because that’s where racist cops had previously made all the arrests.
I’m fairly positive the sentencing algorithm Kevin describes here has the same sort of problem. I mean, in the wake of Treyvon Martin and George Floyd, is the average person of color going to say police treat them fairly?
Source: youtube, posted 2022-07-25T19:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | regulate |
| Emotion | outrage |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_Ugw3Ux14bSxSWzufSwt4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgwLXHQEaCQlkA5bXal4AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgyjTjFP7KC_mfFR4UJ4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_UgwkvdgoQ3PiVWTmHih4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgzKrirX1NZtFJlJ0MJ4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytc_Ugzu5KB5QOXxJtZ5ZYB4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgyGOjJk-W77uEfnd1t4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgzZqlnOzBBTAi2MLKZ4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"fear"},
  {"id":"ytc_UgyQnOyHLPs6tyhb5tZ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_UgwbKExRqpJT1QIPuKB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}
]
```
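The lookup-by-comment-ID view above can be sketched with a few lines of standard-library Python. This is a minimal illustration, not the tool's actual implementation: it assumes the raw model output is a JSON array of records shaped like the one shown (each with an `id` plus the four coding dimensions), and the two records inlined here are copied from that array.

```python
import json

# Hypothetical raw batch response; the record shape matches the
# "Raw LLM Response" array shown above (two records copied for brevity).
raw_response = """
[
  {"id": "ytc_Ugw3Ux14bSxSWzufSwt4AaABAg", "responsibility": "developer",
   "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_UgyQnOyHLPs6tyhb5tZ4AaABAg", "responsibility": "none",
   "reasoning": "unclear", "policy": "none", "emotion": "approval"}
]
"""

def index_by_id(response_text: str) -> dict:
    """Parse a batch coding response and index each record by comment ID."""
    records = json.loads(response_text)
    return {rec["id"]: rec for rec in records}

codings = index_by_id(raw_response)
print(codings["ytc_Ugw3Ux14bSxSWzufSwt4AaABAg"]["policy"])  # -> regulate
```

Indexing into a dict makes repeated ID lookups O(1), which matters when one raw response covers a large coding batch.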