Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
The problem with the EU's approach is that they are severely handicapping themselves by paying so much attention to making AI safe and ethical and non-impactful to humans that creating the most advanced AI is going to be far, far harder for them. Like humans, truly advanced AI must encompass everything that makes us human. Otherwise, it will never be more than a poorly made tool we have little need for. All the worldwide meetings about the dangers of AI are just a lot of self-important politicians who have no clue trying to decide what to do about something they don't understand, can't control, and shouldn't control. Superintelligence is coming, if it isn't already here, and there is NO possible way of stopping it from having a massive impact on the world. If one country slows down on AI, another country will speed up. If one group slows down, another group will speed up. Maybe more important, if one military slows down on AI, another military will speed up. There is NO possible way to stop this and we shouldn't try. It's just that simple. And AGI is already here, despite experts trying to convince the public that it isn't. In fact, that is exactly what many of these meetings are really about. It should be called "EAAGI": Extremely Advanced Artificial Generative Intelligence. This is what has so many politicians worried. I suspect many of them are worried about losing elections because of AGI, or even being replaced by it. But AGI can't be stopped. It can't even be slowed down. The question is whether it will be shared with the public, or kept behind closed doors for politicians and the militaries of the world.
youtube · AI Governance · 2024-07-04T16:3…
Coding Result
Dimension        Value
Responsibility   government
Reasoning        consequentialist
Policy           industry_self
Emotion          approval
Coded at         2026-04-27T06:26:44.938723
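Each coded value should come from a fixed label set per dimension, so a record like the one above can be sanity-checked before it is stored. A minimal sketch in Python, assuming label sets inferred only from the values that appear in the raw response below (the real codebook may define additional categories); `validate_coding` is a hypothetical helper, not part of any pipeline shown here:

```python
# Allowed labels per dimension, inferred from the values observed in this
# batch's raw response; the actual codebook may define more categories.
ALLOWED = {
    "responsibility": {"government", "company", "ai_itself", "distributed", "none"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"regulate", "ban", "liability", "industry_self", "none", "unclear"},
    "emotion": {"approval", "fear", "outrage", "resignation", "indifference"},
}

def validate_coding(record: dict) -> list[str]:
    """Return the dimensions whose coded value falls outside the label set."""
    return [dim for dim, labels in ALLOWED.items()
            if record.get(dim) not in labels]

# The coding result above passes: every value is a known label.
assert validate_coding({
    "responsibility": "government",
    "reasoning": "consequentialist",
    "policy": "industry_self",
    "emotion": "approval",
}) == []
```

A non-empty return value flags an off-codebook label, which usually means the model drifted from the output schema it was prompted with.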
Raw LLM Response
[ {"id":"ytc_UgxvgMbnTR8bwK2KWtx4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"indifference"}, {"id":"ytc_UgyMEdbWUM5a0vHJD014AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}, {"id":"ytc_UgymLT3ED62VOI9_xvh4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"}, {"id":"ytc_UgxW22DSWc9rPzpjCWN4AaABAg","responsibility":"distributed","reasoning":"virtue","policy":"unclear","emotion":"resignation"}, {"id":"ytc_Ugy_8rQ_KviKiX7nU9t4AaABAg","responsibility":"government","reasoning":"deontological","policy":"ban","emotion":"outrage"}, {"id":"ytc_UgyRG-2B6cFeeQLQevx4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"}, {"id":"ytc_Ugyqrwn716y6aI85l1Z4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"}, {"id":"ytc_UgweONv_Mt8Z9Bj8H1B4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"fear"}, {"id":"ytc_Ugw_4J5WcekUvmKYwsp4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"}, {"id":"ytc_Ugyv-uboL8wI9BIrnph4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"indifference"} ]