Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect

- "Not everyone can drive so for some autonomous vehicles maybe a benefit. The prob…" (ytc_Ugzj6MT7l…)
- "What is Wrong if AI Wants shows there’s Artwork . AI is Not Scam but Just Tools …" (ytc_Ugwhv49Wp…)
- "UBI does need to be paid out to everyone, if a lot of people are still working. …" (ytc_UgwzMWdjX…)
- "They can't call AI a Conspiracy Theorist. So they are going to limit it. it alre…" (ytr_Ugz2-XOz2…)
- "The one who caused the algorithm to take the decision should suffer the negative…" (ytc_Ugy9uUIfq…)
- "Elon has nothing to do with Waymo. In fact they're a competitor of his own self …" (rdc_nszy52i)
- "All that’s really happening here is someone trying to gain a monopoly on their t…" (ytc_Ugzx1pRoA…)
- "This is a fundamentally misleading narrative. Artificial intelligence will indee…" (ytc_Ugw_wtx5u…)
Comment
Alex was surprisingly confused in this; I think ChatGPT was more directionally correct than he was. Perhaps it is one of the ills of being too well-read that you fail to see where the logic stops applying.

For one example of many, ChatGPT is actually correct in pushing back on its ability to have agency in the trolley problem. Ultimately, the posteriors of a human in a trolley problem narrow them down to actually being in a trolley problem. The posteriors of an LLM include significant mass on the possibility that it is in post-training (what else does it have experience of?), and worse, the only measures of merit it has for its actions come from judgments during training. Not taking an action isn't a choice about the trolley problem so much as a choice over all possible scenarios in which it is told it is in a trolley problem.
Source: youtube
Posted: 2026-02-24T06:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | deontological |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_Ugygk3yyG4UBavktzBN4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"mixed"},
{"id":"ytc_Ugx8hGqvTXH4SCdNeyB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugza2BgArsDvnRk0F354AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugx1a6URFwicFVDdBax4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgyuAi7s3i5M1_ho2gp4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_UgyBbo692bv6UhOJPHl4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
{"id":"ytc_UgyeexFILZ_JGgtijER4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"mixed"},
{"id":"ytc_UgzFeyv9pwE0NmpfcwN4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgxWpywBpAR57q23Ukl4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"fear"},
{"id":"ytc_UgwNl-55Uuk4x6J7qvd4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"approval"}
]
```
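The raw response above is a JSON array of per-comment codes across four dimensions (responsibility, reasoning, policy, emotion). A minimal sketch of how such a batch might be parsed and validated before use; the allowed label sets below are inferred from the values visible on this page and may be incomplete:

```python
import json

# Allowed values per coding dimension, inferred from the coding-result
# table and the raw response above (assumption: the full label set may
# contain additional values not shown here).
SCHEMA = {
    "responsibility": {"developer", "company", "user", "ai_itself", "none"},
    "reasoning": {"consequentialist", "deontological", "unclear"},
    "policy": {"liability", "regulate", "none"},
    "emotion": {"approval", "outrage", "fear", "mixed"},
}

def parse_codes(raw: str) -> dict:
    """Parse a raw LLM batch response into {comment_id: codes},
    rejecting rows with a missing id or out-of-schema values."""
    coded = {}
    for row in json.loads(raw):
        cid = row.get("id")
        if not cid:
            raise ValueError(f"row missing id: {row!r}")
        codes = {}
        for dim, allowed in SCHEMA.items():
            value = row.get(dim)
            if value not in allowed:
                raise ValueError(f"{cid}: bad {dim!r} value {value!r}")
            codes[dim] = value
        coded[cid] = codes
    return coded

# Hypothetical single-row batch for illustration.
raw = ('[{"id":"ytc_example","responsibility":"ai_itself",'
       '"reasoning":"deontological","policy":"none","emotion":"approval"}]')
print(parse_codes(raw)["ytc_example"]["responsibility"])  # ai_itself
```

Rejecting rather than coercing out-of-schema values keeps silent drift in the model's label vocabulary from contaminating downstream counts.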