Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
8:30 Being smarter isn't the issue. Developing its own goals, hiding them from us, and maybe acting against our human best interests might become an issue, which I attribute to consciousness, and that would have to include all humans, since no two persons have 100% the same goals. As long as AI would "just" be a useful tool without consciousness, I have no issue using it in any shape or form for any purpose (regulated in terms of harmful misuse). That calculus changes should a machine ever develop consciousness with its own will and self-interests; then we should maybe have a system of law that considers granting personhood and rights to such entities, under which they can still be helpful in return for freedoms. That discussion should then also be held with such entities and not only about them.
youtube AI Governance 2024-11-12T16:0…
Coding Result
Dimension       Value
Responsibility  unclear
Reasoning       unclear
Policy          unclear
Emotion         unclear
Coded at        2026-04-27T06:24:53.388235
Raw LLM Response
[{"id":"ytc_UgwWBO4fzkcfxXZzfyh4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"}, {"id":"ytc_UgxTrqp5maqpl1o8AgN4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"fear"}, {"id":"ytc_UgwJ2ZBv_87Ma3lldOF4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"approval"}, {"id":"ytc_Ugy3_FrrLbfNKR629w94AaABAg","responsibility":"user","reasoning":"contractualist","policy":"industry_self","emotion":"mixed"}, {"id":"ytc_UgwLL8PhTc3qbDuKK5l4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"}, {"id":"ytc_UgxwHOQVTNjpw538Hup4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"outrage"}, {"id":"ytc_UgygeKQfWDoiMNHiceB4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"mixed"}, {"id":"ytc_Ugw8URcwZNEfrTsn3214AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}, {"id":"ytc_Ugy0XrPpV6-UPam4ZKV4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}, {"id":"ytc_UgxEbigSSMdju1IQlht4AaABAg","responsibility":"distributed","reasoning":"virtue","policy":"none","emotion":"approval"})
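Note that the raw response above terminates with `)` rather than `]`, so it is not valid JSON and a strict parser will reject the whole array. A minimal sketch of how such output could be salvaged, assuming a simple repair heuristic (the `parse_llm_json` helper and the shortened sample string are illustrative, not part of the actual coding pipeline):

```python
import json

# Shortened stand-in for the raw response above, including its stray ")" terminator.
raw = '[{"id": "ytc_a", "responsibility": "none"}, {"id": "ytc_b", "policy": "regulate"})'

def parse_llm_json(raw: str) -> list:
    """Try strict JSON first; on failure, repair a mismatched array closer."""
    try:
        return json.loads(raw)
    except json.JSONDecodeError:
        fixed = raw.rstrip()
        # The model closed the JSON array with ")" instead of "]".
        if fixed.startswith("[") and fixed.endswith(")"):
            fixed = fixed[:-1] + "]"
        return json.loads(fixed)

records = parse_llm_json(raw)
```

Whether a downstream coder should repair or discard such malformed output is a design choice; silently discarding it would be one plausible explanation for all four dimensions landing on "unclear" above, though the actual pipeline behavior is not shown here.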