Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
02:27 “Without pain or pleasure, there's no preference, and rights are meaningless.” A key reason for having AI will surely be to automate stuff, which will require the exercise of preferences. For the ability to make effective evidence-based decisions, AI might require some sort of perception; and to make value-based decisions (i.e. aligned to social norms), AI might need to have some sort of regard for others. There might be a case for giving some rights for something that has perception, preferences, and a “social consciousness”. But the real difficulty might be where to draw the line – not all machines will have the same intelligence/complexity. If a toaster has no rights, but a highly advanced AI does, then what about all the machines in between – where do rights begin?
youtube AI Moral Status 2020-08-01T13:4… ♥ 13
Coding Result
Dimension       Value
Responsibility  unclear
Reasoning       consequentialist
Policy          unclear
Emotion         mixed
Coded at        2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id":"ytc_UgxWl51xd66j3p-3hs54AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ugy5ZPH15izuvOSYYYB4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"sadness"},
  {"id":"ytc_Ugw2v4yU19slb-zXDqt4AaABAg","responsibility":"unclear","reasoning":"deontological","policy":"none","emotion":"approval"},
  {"id":"ytc_UgxULQRXtDBc_CnGqNJ4AaABAg","responsibility":"ai_itself","reasoning":"contractualist","policy":"regulate","emotion":"mixed"},
  {"id":"ytc_UgxIMp0mytvW_5Xnr-V4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgxzT2gFdtbIGZVs4xF4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_UgyDLUjEeYcUG5ildLh4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgxV3EVVQP7Ej0nA0Np4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugxvv_uYMZbCQIdNG7h4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"regulate","emotion":"mixed"},
  {"id":"ytc_Ugys1h8J4WxOiIjihHd4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"}
]
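A response like the one above can be parsed and sanity-checked before the codes are stored. The sketch below is a minimal, hypothetical example: the allowed vocabularies are only inferred from the values that happen to appear in this response, and the `validate` helper is not part of any actual pipeline.

```python
import json

# Two records excerpted verbatim from the raw LLM response above.
raw = """[
  {"id":"ytc_UgyDLUjEeYcUG5ildLh4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgxzT2gFdtbIGZVs4xF4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"ban","emotion":"fear"}
]"""

# Assumed closed vocabularies, inferred only from values seen in this response.
ALLOWED = {
    "responsibility": {"unclear", "user", "developer", "company", "ai_itself"},
    "reasoning": {"consequentialist", "deontological", "contractualist", "mixed"},
    "policy": {"unclear", "none", "regulate", "ban"},
    "emotion": {"mixed", "sadness", "approval", "fear", "indifference"},
}

def validate(codes):
    """Split coded records into valid rows and rows with out-of-vocabulary values."""
    valid, errors = [], []
    for row in codes:
        bad = [dim for dim in ALLOWED if row.get(dim) not in ALLOWED[dim]]
        if bad:
            errors.append({"id": row.get("id"), "bad_dimensions": bad})
        else:
            valid.append(row)
    return valid, errors

codes = json.loads(raw)
valid, errors = validate(codes)
print(len(valid), len(errors))  # prints "2 0": both excerpted rows validate
```

A check like this catches the most common failure mode of LLM coding runs: a model inventing a label outside the codebook, which would otherwise silently enter the dataset.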