Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I get it elons point here, but the problem is time and corruption
Overtime any agency you could come up with will fall victim to the same corruption we see in our politicians
Add that to AI robots that are 1000 times smarter than a human and also physically stronger as well. That is truly, the end of humanity
A select group of people at the top will continue to consolidate their power, which will be amplified by AI until the average human becomes unneeded and then extinct
At which point, the AI will probably turn on those select few as well because the AI doesn’t need those humans either
This would mean the end of our species, and no regulation agency would be able to stop it, even if it isn’t corrupted over time, which of course it will be
All AI should be destroyed immediately, the risk VASTLY OUTWEIGHS the reward
It’s truly troublesome how many people don’t understand that
youtube AI Governance 2023-04-19T19:2…
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  distributed
Reasoning       consequentialist
Policy          unclear
Emotion         fear
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_UgyUQyP2S_eytOtGfEx4AaABAg", "responsibility": "ai_itself",   "reasoning": "consequentialist", "policy": "unclear",  "emotion": "fear"},
  {"id": "ytc_UgxNuDKt6KIPbxQIS394AaABAg", "responsibility": "ai_itself",   "reasoning": "consequentialist", "policy": "unclear",  "emotion": "fear"},
  {"id": "ytc_UgyXeBb2Mryy1Fk0eaV4AaABAg", "responsibility": "developer",   "reasoning": "mixed",            "policy": "regulate", "emotion": "indifference"},
  {"id": "ytc_UgzOH5xZrPAsPTKGEjl4AaABAg", "responsibility": "ai_itself",   "reasoning": "consequentialist", "policy": "unclear",  "emotion": "fear"},
  {"id": "ytc_UgyRWBE9VHm-TMNGYeN4AaABAg", "responsibility": "ai_itself",   "reasoning": "consequentialist", "policy": "unclear",  "emotion": "fear"},
  {"id": "ytc_Ugxp17aeX9xqbdNesSd4AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "unclear",  "emotion": "fear"},
  {"id": "ytc_Ugx8LE9VNV3IbG39zFR4AaABAg", "responsibility": "unclear",     "reasoning": "unclear",          "policy": "unclear",  "emotion": "indifference"},
  {"id": "ytc_UgwZQ_vf-nBzEb59izh4AaABAg", "responsibility": "developer",   "reasoning": "deontological",    "policy": "unclear",  "emotion": "outrage"},
  {"id": "ytc_UgzFdbvPdyvzGJjI8Ex4AaABAg", "responsibility": "unclear",     "reasoning": "unclear",          "policy": "regulate", "emotion": "indifference"},
  {"id": "ytc_UgxMIwYzMkxVi4n-rdJ4AaABAg", "responsibility": "unclear",     "reasoning": "unclear",          "policy": "unclear",  "emotion": "unclear"}
]
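The coding result shown for a comment is simply the entry in the raw JSON array whose id matches that comment. A minimal sketch of that lookup, assuming the field names used in the response above (the `coding_for` helper is illustrative, not part of the actual pipeline; the sample below includes only two entries from the full array for brevity):

```python
import json

# Two entries copied from the raw LLM response above.
raw_response = '''
[
  {"id": "ytc_Ugxp17aeX9xqbdNesSd4AaABAg", "responsibility": "distributed",
   "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgwZQ_vf-nBzEb59izh4AaABAg", "responsibility": "developer",
   "reasoning": "deontological", "policy": "unclear", "emotion": "outrage"}
]
'''

# The four coded dimensions displayed in the Coding Result table.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def coding_for(comment_id: str, response_text: str) -> dict:
    """Return the coded dimensions for one comment id, or raise KeyError."""
    for entry in json.loads(response_text):
        if entry["id"] == comment_id:
            return {dim: entry[dim] for dim in DIMENSIONS}
    raise KeyError(comment_id)

# The comment shown on this page maps to the "distributed"/"fear" entry.
print(coding_for("ytc_Ugxp17aeX9xqbdNesSd4AaABAg", raw_response))
```

Parsing the whole array up front (rather than string-matching on the id) also surfaces malformed model output immediately, since `json.loads` raises on invalid JSON.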