Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
While I don't trust AI because it is inherently evil, I also don't trust the CEOs and owners, saying that they fear their own creations. I think it is that, like any corporation, they want to create a barrier of entry for any potential competitors. That is why they go to Congress and say the industry needs to be regulated; they know that they are big enough to meet whatever requirements that may be made, but likely smaller startups would not be able; also, pretty good chance (like with finance and fossil fuels) the industry would heavily have a hand in creating the new regulations. The idea is "we are the only ones responsible enough and with enough resources to do it right"; instead, I say we call their bluff, and ban (on pain of death) any research into artificial intelligence, and see if they stick by these dire warnings, or if it was all just fear mongering to create a barrier of entry, and increase/protect their own market share. Anyone thinking that these billionaires are doing thing right thing is being fooled; they would burn the rest of the species if it made their descendants slightly richer, and have proved so in many other ways. They don't need regulation, it needs to be banned, and again, not with just a fine (which is usually a fraction of what the corporation makes in a day, like with financial crimes)- it needs to be at least lengthy prison time, if not execution, so that they know we are serious. I, for one, would applaud a CEO being beheaded for putting their own profit over the interests of humanity, which those scumbags have consistently proven is the case.
youtube AI Governance 2023-10-13T15:2… ♥ 2
Coding Result
Dimension        Value
Responsibility   unclear
Reasoning        unclear
Policy           unclear
Emotion          unclear
Coded at         2026-04-26T23:09:12.988011
Raw LLM Response
[{"id":"ytc_UgzxbxHJAcifBnjrXFB4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"resignation"}, {"id":"ytc_UgwdEblPYRqPehPs89t4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"fear"}, {"id":"ytc_Ugyu-abhvc6AruOBnYF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}, {"id":"ytc_UgzfAgck95CZ61cUWHx4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"mixed"}, {"id":"ytc_Ugy9cPhfbEnPWi8-5VR4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"fear"}, {"id":"ytc_Ugwjk_yIEloaa7T0Zrl4AaABAg","responsibility":"company","reasoning":"deontological","policy":"none","emotion":"indifference"}, {"id":"ytc_UgzjtbzBJglKDhZvAI14AaABAg","responsibility":"company","reasoning":"virtue","policy":"regulate","emotion":"outrage"}, {"id":"ytc_UgxqhMYzeYmoF3cwOOF4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"fear"}, {"id":"ytc_UgxRVEF1Qxa1uE5MyZx4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"}, {"id":"ytc_UgyRadCRC22lKq_Q_Xt4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"mixed"})
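The all-"unclear" coding result above is consistent with a parse failure: the raw response opens its array with `[` but closes it with `)` instead of `]`, which is not valid JSON, so `json.loads` would reject the whole batch. A minimal sketch of how such a fallback might work — the function name, the `UNCLEAR` record, and the fallback policy are illustrative assumptions, not the tool's actual code:

```python
import json

# Hypothetical fallback record; the field names mirror the coding
# dimensions shown in the result table above.
UNCLEAR = {"responsibility": "unclear", "reasoning": "unclear",
           "policy": "unclear", "emotion": "unclear"}

def parse_raw_response(raw: str) -> list[dict]:
    """Parse a raw model response into a list of per-comment codes.

    If the JSON is malformed (e.g. the array is closed with ')'
    rather than ']', as in the raw response above), fall back to a
    single all-"unclear" record instead of crashing the pipeline.
    """
    try:
        parsed = json.loads(raw)
        if isinstance(parsed, list):
            return parsed
    except json.JSONDecodeError:
        pass
    return [dict(UNCLEAR)]

# A short stand-in for the raw response above: a well-formed object,
# but the surrounding array is closed with ')' as in the original.
bad = '[{"id":"ytc_x","responsibility":"company"})'
print(parse_raw_response(bad))  # falls back to the all-"unclear" record
```

Under this reading, the coder did not judge the comment's responsibility, reasoning, policy, or emotion to be unclear on the merits; the batch simply failed JSON validation and the dimensions defaulted.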