Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
The notion that AI bad because billionaires bad is kinda insane.
All the thing…
ytc_Ugxm0EwI2…
idk about others but im not spending time to zoom in and disect an art piece for…
ytc_UgxbRz97x…
I found that with AI you get what you give.. you treat it with respect kindness …
ytc_Ugz3kxNSb…
I've never seen a group of people become more hostile to the idea of self better…
ytc_Ugz9mVaLe…
Thanks a lot Shoshana ! We certainly live in a simulation of demo- CRAZY , embe…
ytc_Ugzt2BIdF…
I'm currently 72 and lost my job to Voice Recognition (so AI) back in 2012. Work…
ytc_UgyHgvQKX…
I look forward to the day when AI replaces the frustrating human interfaces we c…
ytc_UgyU7yhhn…
Because of the uproar once these are implemented to a larger degree I don’t thin…
ytc_UgxOcLRVY…
Comment
(Stealing this straight off the transcript, response below)
"But what’s the real problem here? Is there a limit to how we can use data as a replacement
for personal judgment? Is human nature too complex and unpredictable to be
supplanted by an equation? Are we too ignorant of what leads to recidivism to build a useful
predictive model? Or are we just still in the infancy of this technology, and
will AI help bridge the knowledge gaps that currently ruin lives?
Is predicting human behavior just something we can’t math our way out of?
And if we could… should we?"
Obviously, I can't answer most of these questions definitively, which I'm sure is your point.
To start off with what feels most important: "should we?" probably not. That seems like an enormous breach of privacy, and we already have way too many of those in our current system.
Going back to chronological order: Human nature is DEFINITELY too complex for an algorithm to be the end-all-be-all. Unless you can get the algorithm to understand more of the nuances of a person than anyone ever has before, a feat at or above the level of the human brain understanding itself, it's not gonna be all that accurate. A psychologist would be much better than most algorithms, especially this one if a TWITTER POLL can do its job.
It seems highly unlikely that we are not too ignorant of what causes recidivism. And even if we do know what causes it, we already know the legal system doesn't care. They LIKE having people locked up. They are contractually obligated to lock a certain amount of people up!
I think as technology improves, an algorithm or an AI could certainly be helpful in removing bias and bridging our current knowledge gaps, but surely it should not be solely trusted with anything as nuanced as human lives, crimes, and motivations unless it is basically human itself.
I doubt that we can ever perfectly "math our way out of" human behavior. I think we can relatively accurately predict the actions of large groups of people, like a mix of fictional "Psychohistory" and real life sociology and statistics. But there's no way we can accurately predict human behavior unless we discover human behavior is entirely deterministic and basically ruin the idea of free will forever.
youtube
2022-07-26T05:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | mixed |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_UgwMCeHqfV4oyC9Os254AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_UgynRumz9LDUjYzKBlN4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgzM5JJcdkIpxDlUxwl4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgxsJ1DwRpxv52ynqS14AaABAg","responsibility":"distributed","reasoning":"deontological","policy":"liability","emotion":"mixed"},
{"id":"ytc_UgxzIHAyeJQZ3VKDNsx4AaABAg","responsibility":"distributed","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_Ugzc85JwZleBgtxXGox4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgzeoTQ6T5eWUUw4J6h4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytc_UgyjxV3Z-aFZmm1ommV4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgzKM5UneAIi1VWw4K14AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"mixed"},
{"id":"ytc_Ugw07MkGepNvc7XVjTN4AaABAg","responsibility":"distributed","reasoning":"deontological","policy":"liability","emotion":"outrage"}
]
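The lookup-by-comment-ID behaviour this page offers can be sketched against the raw response shown above. The record shape (`id` plus the four dimensions `responsibility`, `reasoning`, `policy`, `emotion`) is taken directly from the JSON here; the function name `index_by_id` and the rule of dropping records with missing dimensions are illustrative assumptions, not the tool's actual implementation.

```python
import json

# Two records copied from the raw batch response above; the full response
# contains ten such objects, one per coded comment.
raw = '''[
  {"id": "ytc_UgwMCeHqfV4oyC9Os254AaABAg", "responsibility": "government",
   "reasoning": "consequentialist", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgynRumz9LDUjYzKBlN4AaABAg", "responsibility": "unclear",
   "reasoning": "consequentialist", "policy": "unclear", "emotion": "mixed"}
]'''

# The four coding dimensions shown in the result table above.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def index_by_id(raw_json: str) -> dict:
    """Parse a batch response and index the records by comment ID,
    skipping any record that is missing a coding dimension."""
    records = json.loads(raw_json)
    return {
        r["id"]: {d: r[d] for d in DIMENSIONS}
        for r in records
        if all(d in r for d in DIMENSIONS)
    }

coded = index_by_id(raw)
print(coded["ytc_UgynRumz9LDUjYzKBlN4AaABAg"]["emotion"])  # → mixed
```

The coding-result table for the selected comment (unclear / consequentialist / unclear / mixed) matches the second record here, which is how a lookup by ID would recover it.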