Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
Good to see a logical video on ai without all that “ai will create a utopia” non…
ytc_UgxEw8JXo…
Banning this tech may actually not be the right move. There are real application…
rdc_gqjfciq
Well the robot wouldn't be considered as a legal human, it doesn't have a birth …
ytc_UgzYSkd77…
This is kind of confirming my impression. I feel like everything ai produces is …
ytc_Ugy10PmB4…
Follow Alex Reporterfy Media, Cyrus Janssen, Ben Norton, Carl Zha, Daniel Dumbri…
ytc_UgwjQV5QS…
Moral of this story: Don't rely on ChatGPT to write your deposition, similar to …
ytc_UgxLXb5GQ…
If you learn your company is laying people off for AI, quit that company. Fuk th…
ytc_Ugxalg1HS…
Very smart critique, very true! This could slow regulation, but minimally and sl…
ytc_UgyHSP9xU…
Comment
The problem here is not a failure of AI per se. AI is only as good as the data you give it. The problem is that we actually don't currently have access to all the relevant data points needed to make such predictions. My guess is that predicting recidivism is at least as complicated as predicting the weather 2 months in advance to a high degree of accuracy. Sure, a lot has to do with the person's psychological makeup (and even that can be fairly uncertain - not all psychopaths are criminals). But a lot also has to do with unforeseen events and opportunities (or lack thereof) in a person's life.
youtube
2022-07-25T22:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | resignation |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_UgxGLjPhbv7L5DIQvJB4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
{"id":"ytc_UgyUT2ve0yW5k8YrR654AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_Ugy-aveiVnwA4amrust4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgzRvpAAnZnlQG7lVsp4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"mixed"},
{"id":"ytc_UgyQglA8BqAtm21JaeZ4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"fear"},
{"id":"ytc_Ugyx54w0jVvm3e_kP8p4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"mixed"},
{"id":"ytc_UgwHUzQ-UNWEXF-Z6yN4AaABAg","responsibility":"government","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_Ugx5JfyOgqMmDf4ya8J4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"industry_self","emotion":"mixed"},
{"id":"ytc_Ugz6kNrE6viSmd0_jax4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugxjf8jwbfTdZ3IiWy54AaABAg","responsibility":"government","reasoning":"virtue","policy":"regulate","emotion":"outrage"}
]
```
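A raw batch response like the one above can be parsed and sanity-checked before the per-comment codes are stored. The sketch below is a minimal, hypothetical validator: the allowed values per dimension are inferred only from the labels visible in this page's samples, and the real codebook may define more categories.

```python
import json

# Allowed values per coding dimension, inferred from the samples shown
# above (hypothetical -- the actual codebook may include more categories).
SCHEMA = {
    "responsibility": {"none", "ai_itself", "developer", "government", "distributed"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed"},
    "policy": {"none", "ban", "regulate", "liability", "industry_self"},
    "emotion": {"approval", "fear", "outrage", "mixed", "resignation"},
}

def validate_batch(raw: str) -> list[dict]:
    """Parse a raw LLM batch response, keeping only rows whose values
    all fall inside the schema; malformed rows are silently dropped."""
    rows = json.loads(raw)
    valid = []
    for row in rows:
        if all(row.get(dim) in allowed for dim, allowed in SCHEMA.items()):
            valid.append(row)
    return valid

# Example: one valid row, one with an out-of-schema emotion label.
raw = json.dumps([
    {"id": "ytc_example1", "responsibility": "developer",
     "reasoning": "mixed", "policy": "liability", "emotion": "fear"},
    {"id": "ytc_example2", "responsibility": "developer",
     "reasoning": "mixed", "policy": "liability", "emotion": "joy"},
])
print([row["id"] for row in validate_batch(raw)])  # ['ytc_example1']
```

Indexing the validated rows by `id` then gives the "look up by comment ID" behavior this page offers.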