Raw LLM Responses

Inspect the exact model output that produced the coding for any comment.

Comment
I think it's unethical to maintain a workforce that is under-paid, over-worked and capable of suffering. I think it's unethical to entrust someone's life to a person when a machine can give them a better chance of survival. On that basis alone, yes it's unethical. However we'd need a total restructuring of civilization. Humans would become the creators of the arts, beings of passion and emotion entrusted to do what we do best and the entire dynamic of "you must work in order to eat and have a home" would need to be scrapped. No more forced human labour, perhaps even scrap money entirely. Develop artificial intelligence to the point that most of the "work" can be done automatically. Everyone can sit on their arse, or jog, or make passionate love in their poly triad as much as they want with no fear of financial trouble. Humans maintain only positions of political power (the managing of human affairs), creative jobs (for people who want to), development of the AI and robotics and everyone could be free in a Utopia. However you have to bear in mind that if they develop to the point of wanting rights and freedoms or experiencing emotions we will have a responsibility to address them fairly and as equals, more so, as our children. So that's a concern. My question would be, do we have an ethical responsibility to give them a choice? Is it unethical to keep AI submissive and subservient both in capacity and demeanor in order to live free ourselves? Is it unethical to enslave robots to have ultimate freedom?
youtube · 2015-01-19T15:5… · ♥ 1
Coding Result
Dimension        Value
Responsibility   company
Reasoning        virtue
Policy           regulate
Emotion          outrage
Coded at         2026-04-27T06:24:59.937377
Raw LLM Response
[ {"id":"ytc_Ugg6_c_fnxJFiXgCoAEC","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"}, {"id":"ytc_UgjBrm-BO4E1Z3gCoAEC","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"fear"}, {"id":"ytc_Uggwq5VL_P9YvngCoAEC","responsibility":"distributed","reasoning":"mixed","policy":"unclear","emotion":"fear"}, {"id":"ytc_Ugi28m3CG46xzHgCoAEC","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}, {"id":"ytc_UggmA4p100IU0HgCoAEC","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"fear"}, {"id":"ytc_UgiqEwaXkqSM-ngCoAEC","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"}, {"id":"ytc_Ugi0PpcKcA8VCXgCoAEC","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"resignation"}, {"id":"ytc_UgghsB3quoCVXHgCoAEC","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"mixed"}, {"id":"ytc_UgjzbO8DgHLWlngCoAEC","responsibility":"company","reasoning":"virtue","policy":"regulate","emotion":"outrage"}, {"id":"ytc_UgiclBN6LTRIL3gCoAEC","responsibility":"user","reasoning":"consequentialist","policy":"industry_self","emotion":"fear"} ]