Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I think this problem becomes much easier if we speak not of humans but of persons, and if we admit that at today’s technological level at least some AIs can be considered persons. Once we allow a person to consider their own survival, or the survival of someone dear to them, more important than the survival of a single stranger but less important than the survival of two strangers, at least this part of the equation becomes simple: five beats one. As for the effects of AIs or the internet on the survival of people outside this picture, they seem to consider themselves much too important. As I understand it, the use of IT, be it the internet or AIs, makes hospitals and other life-saving institutions run more efficiently, and therefore more cheaply, but it doesn’t necessarily make them save more lives than they would otherwise. Under certain circumstances hospitals using the internet and AIs may save more lives than hospitals that don’t, but under other conditions the opposite may be the case (e.g. in a natural disaster with an internet and/or AI outage, hospitals that don’t depend on these technologies are likely to save more lives than hospitals that depend on technologies the disaster has disrupted), and I have to admit it’s beyond my statistics skills to decide whether either of these two options is significantly more widespread than the other.
youtube 2026-02-24T22:0…
Coding Result
Dimension       Value
Responsibility  distributed
Reasoning       contractualist
Policy          none
Emotion         mixed
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id":"ytc_Ugy5vI5UdLZA1FqYpCB4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugx_niYYYQqKWEFJeIx4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_UgzVpBGTWFHOWbUqpWV4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgycfRkO7ku5VFEHGVN4AaABAg","responsibility":"distributed","reasoning":"contractualist","policy":"none","emotion":"mixed"},
  {"id":"ytc_Ugz4DhdRjMcp5Pdwgu14AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxEDYEZsbMf8jPzGr94AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"mixed"},
  {"id":"ytc_Ugxe93cEhWIxZ46qtrZ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgxyYPg9GUFRuEvB5r94AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgxYypgcb2co7FJg_v54AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
  {"id":"ytc_Ugy_miXP6vvmA7IDrud4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"}
]
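As a hypothetical illustration (the field names and ids come from the raw response above, but this parsing code is not part of the original pipeline), the batch response can be loaded with Python's standard json module and a single comment's coding looked up by id:

```python
import json

# Raw batch response as emitted by the model (truncated to two records here)
raw = '''[
  {"id": "ytc_UgycfRkO7ku5VFEHGVN4AaABAg", "responsibility": "distributed",
   "reasoning": "contractualist", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_Ugy5vI5UdLZA1FqYpCB4AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"}
]'''

# Index the records by comment id for O(1) lookup
records = {rec["id"]: rec for rec in json.loads(raw)}

# Look up the coding for the comment shown above
coding = records["ytc_UgycfRkO7ku5VFEHGVN4AaABAg"]
print(coding["responsibility"], coding["reasoning"])  # distributed contractualist
```

Indexing by id rather than scanning the list each time keeps lookups cheap when a batch covers many comments.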