Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I think the final dialogue the ultimate AI will deliver to human will be..... " Those who can't govern themselves, bound to govern by others " To be honest , i can't agree more with this statement in every sphere of life, Human or any other species for that matter, making the right decision will make you the king, opposite will make you dead, nature or anywhere. Well, people will say human rule because of acquiring knowledge, sharing knowledge , research and development or whatever.......... But ultimately the progress or winning depends on one singular task, making the right decision. Undoubtedly we are in the most interesting time of humankind and planet earth. But I'm very very very excited to see which decision the humankind makes in this particular case, tbh way more excited to see what decision AI makes if it reaches that all powerful position. Call me blunt but, apart from the horrible feeling of losing humanity in such a manner, The decision AI makes (if it reaches that omnipotent form ofc) will finally answer the most sought after question is history................... What is consciousness?............ and where does morality comes from? or if there's a such thing exists in consciousness.
Source: youtube · AI Governance · 2023-07-07T07:2… · ♥ 1
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  ai_itself
Reasoning       deontological
Policy          unclear
Emotion         resignation
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_UgzJ-opPdXq686oSss14AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugyz5-lKrCIQv9L_A0N4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "ban", "emotion": "outrage"},
  {"id": "ytc_UgzOOnKC3CYtn2uz_-x4AaABAg", "responsibility": "ai_itself", "reasoning": "unclear", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_Ugz9XhcJSbG8AjacZcx4AaABAg", "responsibility": "user", "reasoning": "virtue", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugym9TY5kKYZGqJ1cad4AaABAg", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "unclear", "emotion": "resignation"},
  {"id": "ytc_Ugydor4qLE_dERunWdR4AaABAg", "responsibility": "ai_itself", "reasoning": "unclear", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_Ugw-8nskLJvd_ktiDJ94AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugxm0hVYTtdtub1tduR4AaABAg", "responsibility": "ai_itself", "reasoning": "unclear", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_Ugw24fy_fawnZ9xfxWx4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "industry_self", "emotion": "approval"},
  {"id": "ytc_UgzWCfeSp9fWlbXFGvV4AaABAg", "responsibility": "distributed", "reasoning": "mixed", "policy": "unclear", "emotion": "mixed"}
]
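The raw response is a JSON array of per-comment codes, keyed by comment `id`. A minimal sketch of how the coded record for the comment shown above (`ytc_Ugym9TY5kKYZGqJ1cad4AaABAg`) could be looked up, assuming the model output parses as valid JSON (the `raw` string here is an excerpt of the full response):

```python
import json

# Excerpt of the raw LLM response: a JSON array of per-comment codes.
raw = (
    '[{"id": "ytc_Ugym9TY5kKYZGqJ1cad4AaABAg", '
    '"responsibility": "ai_itself", "reasoning": "deontological", '
    '"policy": "unclear", "emotion": "resignation"}]'
)

records = json.loads(raw)

# Index the codes by comment id so any one comment's coding can be inspected.
by_id = {r["id"]: r for r in records}

code = by_id["ytc_Ugym9TY5kKYZGqJ1cad4AaABAg"]
print(code["responsibility"], code["emotion"])  # ai_itself resignation
```

The lookup reproduces the four dimensions shown in the coding-result table (responsibility, reasoning, policy, emotion); in practice the raw string would be the full ten-record array above.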