Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Cool interview, and really clears up a lot about this story. Of course my belief on the matter has always been that no matter how many gates and levers a machine has, it only has the sentince of a pile of gates. He brings up a decent issue which is that even though most educated people recognize it is just a chatbot and doesn't have sentience, you can't really say how far away it is from being sentient if you can't really define why people are sentient. Problem is that a lot of uneducated people really want to believe in the inevitablility in A.I. having human-like intelligence, and another good point he brings up is that not talking about and defining things about these systems will allow misunderstanding about what these machines are. Normally I would never accept the notion that a chatbot should be regarded as an intelligent being, but it is interesting to consider if it's entire purpose is to emulate human-like interactions based on our cultures, treating it like a person may increase it's ability to do so, even if it's not proof of actual sentience or intelligence.
youtube AI Moral Status 2022-06-26T05:2… ♥ 20
Coding Result
Dimension        Value
---------------  --------------------------
Responsibility   none
Reasoning        consequentialist
Policy           none
Emotion          approval
Coded at         2026-04-26T19:39:26.816318
Raw LLM Response
[
  {"id": "ytc_Ugzw9yRvTPtuwvf9q614AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugx2bpDKwnHGRpJ7RuN4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_Ugzw0-n12cjqGBhCULd4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgwyAwKdFDv4aVmCQpx4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_Ugxrln4xAdGhc5U_Ky14AaABAg", "responsibility": "user", "reasoning": "virtue", "policy": "none", "emotion": "mixed"}
]
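A response like the one above can be parsed and indexed by comment id to recover the coding for any single comment. The sketch below is illustrative only, not part of the original pipeline; the field names and ids are taken directly from the raw response shown here.

```python
import json

# Raw LLM response: a JSON array with one record per comment,
# each carrying the four coding dimensions.
raw = """[
  {"id": "ytc_Ugzw9yRvTPtuwvf9q614AaABAg", "responsibility": "none",
   "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugx2bpDKwnHGRpJ7RuN4AaABAg", "responsibility": "developer",
   "reasoning": "deontological", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_Ugzw0-n12cjqGBhCULd4AaABAg", "responsibility": "none",
   "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgwyAwKdFDv4aVmCQpx4AaABAg", "responsibility": "developer",
   "reasoning": "consequentialist", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_Ugxrln4xAdGhc5U_Ky14AaABAg", "responsibility": "user",
   "reasoning": "virtue", "policy": "none", "emotion": "mixed"}
]"""

records = json.loads(raw)

# Index by comment id so any coded comment can be looked up directly.
by_id = {r["id"]: r for r in records}

# The comment displayed above corresponds to the first record.
coding = by_id["ytc_Ugzw9yRvTPtuwvf9q614AaABAg"]
for dimension in ("responsibility", "reasoning", "policy", "emotion"):
    print(f"{dimension}: {coding[dimension]}")
# → responsibility: none
#   reasoning: consequentialist
#   policy: none
#   emotion: approval
```

Indexing by id rather than by list position makes the lookup robust to the model reordering records in its output.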