Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
It is risky to grant to much power to artificial intelligence. There is a great risk with self learning that robots will not act in humans best interests and things will escalate from there. We have enough trouble in the world understanding other humans or reasoning with them. Robots have their place, but with all things people create it ultimately is abused or corrupted. We have started down another path we cannot go back from and AI along with genetic engineering is some of the most dangerous that could well cause human extinction unless we start making decisions based on whats best for all humanity in the long term. Robots have no soul, never will and if we seek companionship outside of humans, I suggest getting a dog. (Mans best friend).
youtube AI Moral Status 2022-10-16T05:1…
Coding Result
Dimension        Value
---------        -----
Responsibility   distributed
Reasoning        consequentialist
Policy           regulate
Emotion          fear
Coded at         2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id": "ytc_UgwE65gqjBa_z9DCowd4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_Ugx-HnVc3T_kNEeGZzR4AaABAg", "responsibility": "developer", "reasoning": "virtue", "policy": "unclear", "emotion": "outrage"},
  {"id": "ytc_UgweXBMtYzQUT44zg0J4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "ban", "emotion": "fear"},
  {"id": "ytc_Ugw2hoVumKdRt3B9TjZ4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_Ugzq2YXZ7UhVM3VmrVp4AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgxHKA1656Tt_vw9ZOV4AaABAg", "responsibility": "developer", "reasoning": "unclear", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgxiqsqqWauneOlktkl4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "ban", "emotion": "outrage"},
  {"id": "ytc_Ugy9fdwK8npuMq7vSjp4AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgzkFtoX9Flc_wudJbJ4AaABAg", "responsibility": "developer", "reasoning": "virtue", "policy": "industry_self", "emotion": "mixed"},
  {"id": "ytc_Ugx41w4FLkhMPG5pQLN4AaABAg", "responsibility": "developer", "reasoning": "virtue", "policy": "ban", "emotion": "outrage"}
]
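Since the model returns one JSON array per batch, a downstream step typically has to parse it and discard malformed records before the codes are stored. Below is a minimal sketch of that validation, assuming the allowed category sets inferred from the responses above (the real codebook may define additional values); `ALLOWED` and `validate_batch` are hypothetical names, not part of the pipeline.

```python
import json

# Allowed values per coding dimension. These sets are inferred from the
# codes visible in the raw responses above and are an assumption; the
# actual codebook may permit more categories.
ALLOWED = {
    "responsibility": {"none", "developer", "ai_itself", "distributed"},
    "reasoning": {"unclear", "virtue", "consequentialist"},
    "policy": {"unclear", "ban", "regulate", "industry_self"},
    "emotion": {"indifference", "outrage", "fear", "mixed"},
}

def validate_batch(raw: str) -> list[dict]:
    """Parse a raw LLM response and keep only well-formed records."""
    records = json.loads(raw)
    valid = []
    for rec in records:
        # Comment ids in the responses above all carry a "ytc_" prefix.
        if not rec.get("id", "").startswith("ytc_"):
            continue
        # Every dimension must be present and hold an allowed value.
        if all(rec.get(dim) in vals for dim, vals in ALLOWED.items()):
            valid.append(rec)
    return valid

raw = ('[{"id":"ytc_Ugzq2YXZ7UhVM3VmrVp4AaABAg",'
       '"responsibility":"distributed","reasoning":"consequentialist",'
       '"policy":"regulate","emotion":"fear"}]')
print(validate_batch(raw))  # keeps the one valid record
```

Invalid records are dropped rather than repaired here; an alternative design would re-prompt the model for just the failing ids.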