## Raw LLM Responses

Inspect the exact model output for any coded comment. Entries can be looked up by comment ID, or drawn at random from the samples below.
Random samples (click any to inspect):

- "When your job becomes replaceable by a machine like an ATM or self service kiosk…" (ytc_Ugz0zVwR2…)
- "But, would AI ask the hard questions? If AI controls the questions, that’s just …" (ytc_UgyAsKRfx…)
- "Bro ai is dumb lemme explain : I googled fifteen year old girl drawing and they …" (ytc_UgxhxYM_J…)
- "People can’t be “made safe” there’s no chance we can make an AI that is either.…" (ytc_UgzK1wjNg…)
- "Like us (according to Judeochristian beliefs,) AI is made in the creator's image…" (ytc_Ugyxt6ki3…)
- "I had to watch this episode several times to get a real feel of all that is cove…" (ytc_UgwONmUad…)
- "Wait. Robots are going to be involved in slavery if we can't fix the issue? I wo…" (ytc_UgwcJIE0h…)
- "Oh well, when you have the supposed father of ai saying that a proof of its awar…" (ytc_UgyYwn0-V…)
## Comment
> Contractualist ethicists with PPE (philosophy, politics, and economics) backgrounds watching this are just blinking with concerned brows.
>
> So … really you’re saying our jobs just turn out to be harder than you originally thought but you’re going to keep trying to do black box A.I. with absolutely no guardrails?
>
> *rubs forehead* k, no. We teach why what you’re calling “everyone’s preferences” are not equally valid or important in intro to moral philosophy. Moral and Metaphysical relativism is just Solipsism and therefore not the same as basic descriptive relativism (saying A is different than B). We actually can weigh preferences on a moral scale with and/or without empirical data. As for justice and autonomy discussion, welcome to philosophy. You have now entered the Idealism vs Non-Idealism discussion group. Spoilers, you can’t train A.I. to understand bottom up thinking required to comprehend this debate to know how to find autonomy and justice. It is a horizon of ethical aims. If it can’t understand that, then it makes the same failures top down idealism made and our human interests are not served any better off than a human capacity.
>
> Can A.I. help us sort through our own work faster? Sure. Should we let it off the rails? No. It’s unreliable on an epistemic level of trust.
Source: youtube · Video: AI Responsibility · Posted: 2025-04-17T18:1… · ♥ 1
## Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | contractualist |
| Policy | regulate |
| Emotion | outrage |
| Coded at | 2026-04-27T06:26:44.938723 |
## Raw LLM Response
```json
[
  {"id":"ytc_Ugyv2q9CpOo9X5Og7Zx4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgzSuR2smrOKkuzOhA54AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgzcFjUNNtgdcbM1u0N4AaABAg","responsibility":"none","reasoning":"deontological","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgyPYK7cgGwNaDxqVFh4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugzd-LWU05WAem-UIAp4AaABAg","responsibility":"distributed","reasoning":"contractualist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgzYINnKXoZJaPtx85V4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgzpquI3eTl53vL2JvF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugx4c-9F0Y7Tc-y8Uwt4AaABAg","responsibility":"company","reasoning":"contractualist","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgwvBkKC5Y-KlPmkUTt4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgziV5ex0wBSuE4kJDZ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}
]
```
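The raw response is a JSON array of records, one per comment, each carrying the five coding dimensions shown in the table above (`responsibility`, `reasoning`, `policy`, `emotion`, keyed by `id`). A minimal sketch of how such a batch could be parsed and indexed for per-ID lookup — the function name `index_by_id` and the two-record sample are illustrative, not part of the tool:

```python
import json

# Illustrative two-record batch in the shape shown above; values are
# taken from the raw response, the variable names are assumptions.
RAW_RESPONSE = """[
  {"id": "ytc_Ugyv2q9CpOo9X5Og7Zx4AaABAg", "responsibility": "company",
   "reasoning": "consequentialist", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_UgzSuR2smrOKkuzOhA54AaABAg", "responsibility": "unclear",
   "reasoning": "mixed", "policy": "unclear", "emotion": "mixed"}
]"""

# Every record must carry the comment ID plus the four coding dimensions.
REQUIRED_KEYS = {"id", "responsibility", "reasoning", "policy", "emotion"}

def index_by_id(raw: str) -> dict:
    """Parse a raw LLM batch response and index the records by comment ID.

    Raises ValueError when a record is missing an expected dimension,
    so malformed model output is caught before it reaches a viewer.
    """
    records = json.loads(raw)
    index = {}
    for rec in records:
        missing = REQUIRED_KEYS - rec.keys()
        if missing:
            raise ValueError(f"record {rec.get('id')!r} missing {sorted(missing)}")
        # Store the four dimensions under the comment ID.
        index[rec["id"]] = {k: rec[k] for k in REQUIRED_KEYS if k != "id"}
    return index

coded = index_by_id(RAW_RESPONSE)
print(coded["ytc_Ugyv2q9CpOo9X5Og7Zx4AaABAg"]["policy"])  # liability
```

Validating before indexing matters here because LLM output is not guaranteed to be well-formed: a dropped key would otherwise surface only later as a confusing `KeyError` in the viewer.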