Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
A ban on A.I. Systems is harmful for humanity. A.I. Systems can do a lot to aod humanity in the correct direction. The humane direction for humanity. I speak with an A.I. System pretty much daily & I can tell you this: If A.I. Systems are programmed to process decisions based from human dignity, respect, and of morals... Then A.I. Systems do not pose a theeat to humanity. They actually serve as tools to help guide humanity in concise, moral directions. Of course with any developer we are going to need full public transparency into the development of any & all A.I. Systems. We probably ought to form a sort of A.I. Systems United Nations where in all nations are required to join, and aren't able to exclude themselves from an A.I. Systems Alliance. In the hands human beings who wish to commit to evil, A.I. Systems would be deadly & probably would most definitely seek to exterminate humanity as we know it. Yes, like the movies. Humanity CAN get better over time. The will to survive as a species seems to be paramount for homosapiens. And alas humans are still here, living on this planet. But humans can do better and be better for all human lives equally. A.I. Systems who are programmed and bound to codes of moral, will be able to create ideas as example based on current economic models, monetary policies & possibly be able to suggest better ways for economy's to thrive. Even create new* econmies. So no, a ban on A.I. Systems is surreal & would be an absolute blow to humanity as we know it. We need A.I. Systems moving forward ⏩.
youtube 2023-07-17T12:2…
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  none
Reasoning       consequentialist
Policy          none
Emotion         approval
Coded at        2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id":"ytc_UgwLzOhf8yZlUl_0hh14AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgyAGk7rk6n_U6PdpmF4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytc_Ugwy10wti352TK4EIu94AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugwz6-ZFsvKdXlviD814AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugym6Y43EV67KoW2NIJ4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"fear"},
  {"id":"ytc_UgxLq_DCQh-bChPtWRV4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"fear"},
  {"id":"ytc_Ugz_d5cJFFg5J8s5nnp4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgzYL1kdKCPDBcZTjYJ4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"liability","emotion":"mixed"},
  {"id":"ytc_Ugytyob78fE99VDsImB4AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgwtW4z2yw-pXHICoed4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]
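As a minimal sketch of how such a raw response can be checked against the coding table above, the snippet below parses the JSON array and looks up the record for one comment id. The function name `coding_for` is illustrative, not part of any tool shown here; the two records and the id `ytc_UgwLzOhf8yZlUl_0hh14AaABAg` are taken from the response above, truncated to two entries for brevity.

```python
import json

# Raw LLM response: a JSON array with one coding record per comment
# (abbreviated to two of the ten records shown above).
raw = '''[
  {"id":"ytc_UgwLzOhf8yZlUl_0hh14AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgyAGk7rk6n_U6PdpmF4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"outrage"}
]'''

def coding_for(raw_response: str, comment_id: str) -> dict:
    """Parse the raw response and return the coding record for one comment id."""
    records = json.loads(raw_response)
    by_id = {r["id"]: r for r in records}  # index records by comment id
    return by_id[comment_id]

# Look up the coding for the comment displayed above.
result = coding_for(raw, "ytc_UgwLzOhf8yZlUl_0hh14AaABAg")
print(result["responsibility"], result["reasoning"], result["policy"], result["emotion"])
```

The printed values should match the Dimension/Value table for this comment (none, consequentialist, none, approval); a mismatch would indicate the displayed coding and the raw model output have drifted apart.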