Raw LLM Responses
Inspect the exact model output for any coded comment.
Key Concepts in This Episode
Moral Outsourcing
Delegating not just decisions, but moral judgment, to machines or systems that cannot feel duty, guilt, or responsibility.
Moral Decision vs Technical Decision
A technical decision optimises outcomes. A moral decision asks what ought to be done, and who answers for it.
Moral Awareness
Recognising that a choice has ethical weight, not just consequences.
Intentionality
Acting for reasons and values, not merely following rules or instructions.
Accountability
Being willing to publicly own and justify a decision, especially when harm occurs.
Algorithmic Bias
When systems reproduce historical inequalities because they learn from biased data, even without malicious intent.
Moral Deskilling
The erosion of human moral judgment when we outsource decision-making to systems and stop practising ethical reasoning ourselves.
Accountability Gaps
Situations where responsibility is so diffused across systems and institutions that no one feels answerable.
“The System Decided” Fallacy
The false belief that automation removes human responsibility for outcomes.
Source: YouTube · AI Responsibility · 2025-09-29T12:1… · ♥ 11
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | mixed |
| Policy | unclear |
| Emotion | mixed |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_UgyeWneRgaVtpRyXv3F4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgyhlfR7b3ou-iV3m_p4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"indifference"},
  {"id":"ytc_UgwMjKWaZMQ_dYrQ2E94AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgxI_E0arQWuf790eFJ4AaABAg","responsibility":"company","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugz6z5qPF_F6cTXpMY14AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ugypf8F-jnDKtjGqiaB4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"ban","emotion":"fear"},
  {"id":"ytc_UgwjNFeD8ZoijcC4rKJ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgzrqiBLnZWGGxwx3VF4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
  {"id":"ytc_UgzK_FzqbhCfc0cHcBJ4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_Ugx-mbK-wtSXjIF_Ba54AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"liability","emotion":"outrage"}
]
```
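Because the model output is a plain JSON array with one object per comment, downstream tallies (like the per-dimension values in the Coding Result table above) reduce to parsing and counting. A minimal sketch, assuming only the field names visible in the response; the `validate` helper and `DIMENSIONS` tuple are illustrative, not part of the actual pipeline:

```python
import json
from collections import Counter

# Raw model output as shown above (truncated to two entries for brevity).
raw = '''
[
 {"id":"ytc_UgyeWneRgaVtpRyXv3F4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"liability","emotion":"outrage"},
 {"id":"ytc_UgyhlfR7b3ou-iV3m_p4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"indifference"}
]
'''

# The four coding dimensions visible in both the table and the JSON.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def validate(rows):
    """Reject any coded comment missing its id or one of the dimensions."""
    for row in rows:
        missing = [k for k in ("id", *DIMENSIONS) if k not in row]
        if missing:
            raise ValueError(f"{row.get('id', '<no id>')}: missing {missing}")
    return rows

rows = validate(json.loads(raw))

# Tally each dimension's values across all coded comments.
counts = {dim: Counter(r[dim] for r in rows) for dim in DIMENSIONS}
print(counts["responsibility"])  # Counter({'ai_itself': 1, 'company': 1})
```

Validating before counting matters here: LLM-coded output can drop a field or return malformed JSON, and it is better to fail loudly than to silently undercount a dimension.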