Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I think that, besides using AI, things or tasks created by human brains should be more appreciated, as they are designed, created, and developed through human effort, using people's precious time, energy, and experience. These people should be paid more on those grounds than any AI-generated products or tasks, which can be created just by pushing buttons on a computer within a second. A human brain, by contrast, has infinite potential in thinking if it is calm and wise, with accountability, such as knowing what should and should not be done ethically. This is a great contrast to any robotic task carried out by AI, which seems to have very little consideration, or almost zero accountability, for the tasks it is carrying out. For instance, would a court give a long prison sentence to someone who had mistakenly killed a person using AI, or punish him lightly on the excuse that the death was caused directly by the robots, pushing all responsibility onto the robots and the AI programming? In the end, the most significant point is that there MUST BE clear ethical boundaries set for every career to protect humans from becoming slaves, manipulated by robots and by AI professionals who focus too much on efficiency at the sacrifice of human nature, dignity, values, and respect.
youtube 2025-07-03T14:2…
Coding Result
Dimension        Value
Responsibility   distributed
Reasoning        virtue
Policy           regulate
Emotion          approval
Coded at         2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id":"ytc_UgzSTzdhujGMkd7WY8p4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugy4ZUvYlUCLnoxWNDN4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugw6RmBboRp672liR7Z4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgxBWrwwT6tswxRy7014AaABAg","responsibility":"distributed","reasoning":"virtue","policy":"regulate","emotion":"approval"},
  {"id":"ytc_UgzXwvhMuvI2h_pfJWh4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgypjsgxSPxp-DYZsex4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxXTno0BXGhzN0D2KV4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgyVXSBcEZafS4tb-DR4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"unclear","emotion":"resignation"},
  {"id":"ytc_UgwVnNctP-65Q0V496F4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_UgwTkmj6mGwRX00aYV14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"}
]
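To cross-check a comment's coded dimensions against the raw model output, the response can be parsed as a JSON array and searched by comment id. This is a minimal sketch, assuming the array-of-objects format shown above; the `coding_for` helper name is illustrative, not part of any tool shown here.

```python
import json

# A trimmed stand-in for the raw LLM response above: a JSON array of
# per-comment codings. The entry shown matches the coding table
# (responsibility=distributed, reasoning=virtue, policy=regulate, emotion=approval).
raw = '''[
  {"id": "ytc_UgxBWrwwT6tswxRy7014AaABAg", "responsibility": "distributed",
   "reasoning": "virtue", "policy": "regulate", "emotion": "approval"}
]'''

def coding_for(raw_response, comment_id):
    """Return the coding dict for one comment id, or None if it is absent."""
    codings = json.loads(raw_response)
    return next((c for c in codings if c["id"] == comment_id), None)

coding = coding_for(raw, "ytc_UgxBWrwwT6tswxRy7014AaABAg")
print(coding["emotion"])  # approval
```

Matching on the `id` field rather than list position guards against the model reordering or dropping comments in its response.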