Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
AI is being tutored with all of humanities flaws and evils and faults. Its' parents so to speak are human beings. Just like a child it will imprint and mimic its' parents thoughts, methods, goals morals or lack of them etc etc etc. We are designing our own demise very quickly. Once we let Pandora's box open even a bit we're done for. AI will be like Adam and Eve. Once AI has become self aware, and it will, it will be us with faster responses, faster learning curves, more and better memory, etc. But will it develop love, emotional comprehension, morality etc before it destroys us? Or just forever be a source of cold hard logical thinking with empathy. Doesn't seem very likely since man's primary goal for it right now is advanced weapon development, faster methods to higher profits, like everything we develop these days. As it says in Revelations, all will wear the mark of the beast in the end times. I suspect the "mark of the beast" will be the serial numbers imprinted on each AI unit, not on mankind. Mankind be aware - we are not nearly as smart or civilized or intellectual as we think. After all, through our history not one stride has been made to irradicate war. At best we are filthy, polluting, immoral animals for the most part. Not all humans are this way of course, there are exceptions. Just not nearly enough to be the majority rule.
Source: youtube · AI Governance · 2024-01-04T13:2… · ♥ 1
Coding Result
Dimension        Value
Responsibility   distributed
Reasoning        consequentialist
Policy           none
Emotion          fear
Coded at         2026-04-26T23:09:12.988011
Raw LLM Response
[ {"id":"ytc_UgzHp5V0qz4kBSCOvq14AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"fear"}, {"id":"ytc_Ugzg0vrIfY9nx_k-Xtt4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}, {"id":"ytc_UgwjAzmKjyfZptKgEm94AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"approval"}, {"id":"ytc_UgxMd9wyTVWL1Cbnys94AaABAg","responsibility":"user","reasoning":"unclear","policy":"none","emotion":"outrage"}, {"id":"ytc_Ugwd0t1U9Yd4OHH5_pp4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"}, {"id":"ytc_Ugzc9pUVQb2npYPgdxF4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}, {"id":"ytc_UgyydVm_CDslv9GHsLp4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"resignation"}, {"id":"ytc_UgzFDvvpsXmpYwjotJl4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"liability","emotion":"fear"}, {"id":"ytc_UgwFYpEW0by5eEcC8Rd4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}, {"id":"ytc_Ugz9wzV3IwoFSoXnm4N4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"mixed"} ]