Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Alex was surprisingly confused by this; I think ChatGPT was more directionally correct than he was. Perhaps it is one of the ills of being too well-read that you fail to see where the logic stops applying. For one example of many, ChatGPT is actually correct in pushing back on its ability to have agency in the trolley problem. Ultimately, the posteriors of a human in a trolley problem narrow them down to actually being in a trolley problem. The posteriors of an LLM include significant mass on being in post-training (what else does it have experience of?), and what's worse, the only measures of merit it has for its actions come from judgements during training. Not taking an action isn't a choice about the trolley problem so much as a choice over all possible scenarios in which it is told it is in a trolley problem.
youtube 2026-02-24T06:4…
Coding Result
Dimension        Value
Responsibility   ai_itself
Reasoning        deontological
Policy           none
Emotion          approval
Coded at         2026-04-26T23:09:12.988011
Raw LLM Response
[ {"id":"ytc_Ugygk3yyG4UBavktzBN4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"mixed"}, {"id":"ytc_Ugx8hGqvTXH4SCdNeyB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"outrage"}, {"id":"ytc_Ugza2BgArsDvnRk0F354AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"}, {"id":"ytc_Ugx1a6URFwicFVDdBax4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"}, {"id":"ytc_UgyuAi7s3i5M1_ho2gp4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"mixed"}, {"id":"ytc_UgyBbo692bv6UhOJPHl4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"mixed"}, {"id":"ytc_UgyeexFILZ_JGgtijER4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"mixed"}, {"id":"ytc_UgzFeyv9pwE0NmpfcwN4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"}, {"id":"ytc_UgxWpywBpAR57q23Ukl4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"fear"}, {"id":"ytc_UgwNl-55Uuk4x6J7qvd4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"approval"} ]