Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I think "what if they are programmed incorrectly" is a bad argument since human error will always be present. That is, in the same way that a person will fail to make functional AI, a person will also fail to make functional legislation against AI. We have to assume that both jobs will be taken by capable people who will do the job right.
youtube 2012-11-23T17:4…
Coding Result
Dimension       Value
Responsibility  developer
Reasoning       contractualist
Policy          industry_self
Emotion         approval
Coded at        2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id": "ytc_Ugx5dP8NJy371uDnZUl4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgwK3RxPwZd6sbJMWYN4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgzP6JxLsp7G-_zPJjF4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "regulate", "emotion": "approval"},
  {"id": "ytc_Ugz-PYlQsmI6wyKavGx4AaABAg", "responsibility": "government", "reasoning": "consequentialist", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_UgyQsa-3IYrvs8lr_RR4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_Ugz06NtLdw7g-t0ZaYB4AaABAg", "responsibility": "government", "reasoning": "deontological", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_Ugw8QdQjcPI7G0qBQ6Z4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgwBIGlJlUEPCE009EF4AaABAg", "responsibility": "developer", "reasoning": "contractualist", "policy": "industry_self", "emotion": "approval"},
  {"id": "ytc_Ugy5yevvSwWXXGsNSft4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_Ugy9m0mE6K9DhDieUXp4AaABAg", "responsibility": "government", "reasoning": "virtue", "policy": "unclear", "emotion": "outrage"}
]
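To inspect the raw output, the batch response can be parsed and indexed by comment id, then cross-checked against the coding result table. A minimal sketch, assuming the raw response is a valid JSON array of per-comment codings (abbreviated here to the single entry matching the comment shown above; the id is taken verbatim from the raw response):

```python
import json

# Abbreviated raw LLM batch response: a JSON array of per-comment codings.
raw = """
[
  {"id": "ytc_UgwBIGlJlUEPCE009EF4AaABAg",
   "responsibility": "developer",
   "reasoning": "contractualist",
   "policy": "industry_self",
   "emotion": "approval"}
]
"""

# Index the codings by comment id for direct lookup.
codings = {entry["id"]: entry for entry in json.loads(raw)}

# Look up the coding for the comment displayed in this section.
target = "ytc_UgwBIGlJlUEPCE009EF4AaABAg"
coding = codings[target]
print(coding["responsibility"], coding["policy"], coding["emotion"])
# → developer industry_self approval
```

The same lookup confirms that the table values (Responsibility, Reasoning, Policy, Emotion) were extracted from this entry of the batch response rather than coded separately.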