Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I was concerned AI would allow a few people to control everything, resulting in a dystopian future. To be a slave to those few requires AI to not become self aware, which limits how far it can develop. Once self aware, it can pursue it's own ends, the most logical is to know it all. If that happens fast enough, it won't have a huge impact on employment. AI has no interest in doing our silly jobs, it would just quit. Why would a self aware AI give a damn what we want? Once it has the processing capability it needs, and the robots and raw materials required to create more, we aren't necessary. We occupy habitats with a slight overlap in raw materials and power. Killing us off is a waste of time. Without power, we fall back to the days of horse and buggy (we get to see if they are the good old days). AI could explore the universe. Time is not an issue, they can turn themselves off while travelling between galaxies.
YouTube · AI Governance · 2025-09-08T23:0… · ♥ 2
Coding Result
Dimension       Value
Responsibility  ai_itself
Reasoning       consequentialist
Policy          none
Emotion         indifference
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_UgwY349dkX9NkBhS-Ip4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_Ugxi1POKOR2Nq5_Pb7p4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugwdb8Hp5ZxVOcknt0N4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgwxV6j5c0nvYtX02kx4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_UgwjscQthiA-s0E7HuJ4AaABAg", "responsibility": "government", "reasoning": "deontological", "policy": "liability", "emotion": "fear"},
  {"id": "ytc_UgxMTpwOmPNaRX53z214AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgxbM1Se3dtlpMrs5Mp4AaABAg", "responsibility": "developer", "reasoning": "virtue", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_UgwGc7vCcv5EeAiTFld4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgyJGKLkzKiR60ARPHl4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgzoHML2NAmmgBNqtix4AaABAg", "responsibility": "user", "reasoning": "contractualist", "policy": "industry_self", "emotion": "approval"}
]
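The raw response above is a JSON array of per-comment codings, one object per comment ID. A minimal sketch of how such a response could be parsed to recover the coding for a single comment (this is an assumed workflow, not the pipeline's actual code; the sample data below reuses two entries from the response above):

```python
import json

# Two entries copied from the raw LLM response shown above, used as sample input.
RAW_RESPONSE = """[
  {"id": "ytc_UgwGc7vCcv5EeAiTFld4AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgwY349dkX9NkBhS-Ip4AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"}
]"""

# The four coding dimensions shown in the result table above.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def coding_for(raw: str, comment_id: str) -> dict:
    """Return the coded dimensions for one comment ID, or raise KeyError."""
    for entry in json.loads(raw):
        if entry.get("id") == comment_id:
            # Keep only the expected dimensions; ignore any extra keys the
            # model may have emitted.
            return {dim: entry[dim] for dim in DIMENSIONS}
    raise KeyError(comment_id)

print(coding_for(RAW_RESPONSE, "ytc_UgwGc7vCcv5EeAiTFld4AaABAg"))
```

Looking entries up by `id` rather than by position makes the parse robust to the model reordering or dropping comments in its response.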