Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples:

- "Thinking through how artificial intelligence is going to reshape our lives as a …" (ytc_Ugwv7wfmH…)
- "radiologists is a bad example, simply because the world (as you mentioned, insur…" (ytc_UgxqEdM1W…)
- "Personally, I’m all for AI art it it makes real artist more willing to negotiate…" (ytc_Ugz1C2Ij1…)
- "the robot technology is not good enough to make full androids yet These are huma…" (ytc_UgzIeZ4KO…)
- "As a nurse I would love for AI to at least help out with the heavy lifting we do…" (ytc_UgwpGdK2O…)
- "Until AI can actually think, and it demonstrably can’t, it is incapable of makin…" (ytc_Ugyh8p3g7…)
- "Wondering how could software distinguish this scenario from the one where thieve…" (ytr_Ugx6YrJrZ…)
- "Text to speech is an accommodation for disabilities, a keyboard could even be an…" (ytc_UgzGmdAAx…)
Comment
"harmful" decisions: in real public decisions there are always trade-offs, and it is difficult, if not impossible, to find a decision everyone will agree isn't harmful. If a decision no one considers harmful doesn't exist, demanding it of AI is demanding something no human institution has ever achieved. The only solution is process: audits, transparency, and the like; but these are things people hate doing.
"biased" decisions: around 70% of people think the BBC is biased but can't agree which way; the split is roughly 35% saying it leans left, 35% saying it leans right, and 30% saying it is balanced. Only a minority occupies the "unbiased" midpoint: a statistical artefact of a dumb-bell distribution. The same problem applies to AI: "biased" relative to what?
Humans form political parties, schools of thought, religions, and so on because we can't all agree, and forcing an average will frustrate anyone not at that average point.
History shows we never agree, but that won't stop utopians in ivory towers insisting one day we might. DeGrasse Tyson shows this clearly.
youtube · AI Governance · 2026-03-25T11:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
{"id":"ytc_UgwIqNU_TMR537ePFTZ4AaABAg","responsibility":"society","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgwvSMYiYT_IuovoE314AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgwY-jQW29BYypAf75F4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"industry_self","emotion":"resignation"},
{"id":"ytc_Ugz5TFxae2j2uFRv29R4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgynE9iH1O3nO18qyCt4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugy09ItQRfK9BBvo-8p4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"liability","emotion":"fear"},
{"id":"ytc_Ugxt_5jb-i6PwtrWlz14AaABAg","responsibility":"company","reasoning":"virtue","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgyX_x6pmFaQoqqHYGt4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgzK0YstJ0vgqcNzZEN4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytc_UgyOoLGrCvVUx8LfmrJ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"}
]
```
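The raw model response is a JSON array with one object per comment, keyed by comment ID. A minimal sketch of how a lookup-by-ID like the one above could work: parse the array, index it by `id`, and fetch one comment's coded dimensions. The two-row response below is abridged from the example (real IDs, real field names); `lookup` is a hypothetical helper name, not part of any tool shown here.

```python
import json

# Abridged raw LLM response: a JSON array of per-comment codings,
# using two of the rows from the example response above.
raw_response = """
[
  {"id": "ytc_UgynE9iH1O3nO18qyCt4AaABAg", "responsibility": "none",
   "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgwIqNU_TMR537ePFTZ4AaABAg", "responsibility": "society",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"}
]
"""

# Index the codings by comment ID for constant-time lookup.
codings = {row["id"]: row for row in json.loads(raw_response)}

def lookup(comment_id: str) -> dict:
    """Return the coded dimensions for one comment; raises KeyError if absent."""
    return codings[comment_id]

coding = lookup("ytc_UgynE9iH1O3nO18qyCt4AaABAg")
print(coding["reasoning"])  # consequentialist
print(coding["emotion"])    # indifference
```

Indexing once into a dict, rather than scanning the array per query, is what makes repeated ID lookups cheap when a response codes many comments in one batch.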