Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
the problem I have with this is this: robots (even AI) require programming. Lots of decisions are just opinions, not 'right' or 'wrong.' so who's opinions are going to be programmed into these robots, from which they will make decisions? Is a robot doctor going to make the 'right decision' in, say, continuing treatment after a patient refuses care?
youtube 2013-06-22T16:1…
Coding Result
Dimension        Value
Responsibility   developer
Reasoning        deontological
Policy           liability
Emotion          fear
Coded at         2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id":"ytc_Ugx1owY8KMQQpaUVoL94AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"fear"},
  {"id":"ytc_UgxCHEe9gdHj20ZduC54AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgxKrONnTALkMSiXPy94AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"resignation"},
  {"id":"ytc_UgyndkOOK5UcwRDL7P14AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"industry_self","emotion":"fear"},
  {"id":"ytc_Ugxf1ljBqJhDZTII-rN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxVWbMuIk6D-APhN2t4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
  {"id":"ytc_UgwIMriQa68OOqDcqr14AaABAg","responsibility":"distributed","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgzbBcYCXu1UFiCis_x4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"mixed"},
  {"id":"ytc_UgzZyZyocjtkajZUTrt4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
  {"id":"ytc_UgyyGJ-lF5FZQ7m-da54AaABAg","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"fear"}
]
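The batch response above is a JSON array of coding records keyed by comment id. A minimal sketch of how such a payload could be parsed and indexed for lookup, assuming the response is always a well-formed array with exactly these four dimension fields (the helper `index_codings` is hypothetical, not part of the tool shown here):

```python
import json

# Two records copied from the raw response above, used as sample input.
raw = '''[
  {"id": "ytc_Ugx1owY8KMQQpaUVoL94AaABAg", "responsibility": "developer",
   "reasoning": "deontological", "policy": "liability", "emotion": "fear"},
  {"id": "ytc_UgxVWbMuIk6D-APhN2t4AaABAg", "responsibility": "government",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "approval"}
]'''

# The four coding dimensions shown in the result table above.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def index_codings(payload: str) -> dict:
    """Parse a batch LLM response and index the records by comment id,
    keeping only the expected coding dimensions."""
    records = json.loads(payload)
    return {
        rec["id"]: {dim: rec[dim] for dim in DIMENSIONS}
        for rec in records
    }

codings = index_codings(raw)
print(codings["ytc_Ugx1owY8KMQQpaUVoL94AaABAg"])
# → {'responsibility': 'developer', 'reasoning': 'deontological',
#    'policy': 'liability', 'emotion': 'fear'}
```

Indexing by id makes it straightforward to join each record back to the comment it codes, as the result panel above does for a single comment.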