Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
@SenatorSanders I'm glad that there are people paying attention to this. In addition to the ability to wipe out the working class, it could wipe out our entire civilization, as there exists a desire to produce AI that wouldn't just equal but exceed human intelligence (sometimes called ASI, or Artificial Superintelligence). The argument that we need to do this in order to beat China with AI is a false narrative, since China has made moves to put guardrails on AI, so we need to do so as well. Historically we have worked out agreements to regulate technology that could endanger human civilization before: we did so with nuclear weapons, including restrictions on testing them in the open air (the Partial Test Ban Treaty and the Comprehensive Test Ban Treaty) and reductions in the size of our nuclear arsenal (our current arsenal is around 3,750 warheads; it was 31,255 back in 1967. The Russian Federation has around 5,500-5,600 as of this writing, but as the USSR it had a peak stockpile of approximately 45,000 warheads, around 1986). The Cold War brought us uncomfortably close to an all-out nuclear exchange on more than one occasion (the Cuban Missile Crisis is one clear example, but there were also a number of false alarms, on both sides, that could have brought this about as well). On the matter of AI: there needs to be some form of AI governance that includes both national and international bodies, as well as means of ensuring that the people who sit on these bodies aren't just politicians and well-connected individuals but competent, ordinary citizens, to ensure the actions of these bodies actually serve humanity as a whole. Thank you, Senator
youtube AI Jobs 2025-10-10T00:3…
Coding Result
Dimension        Value
Responsibility   government
Reasoning        consequentialist
Policy           regulate
Emotion          fear
Coded at         2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_UgypJ4PogNfYaDAghrl4AaABAg", "responsibility": "distributed", "reasoning": "mixed", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgxypJcnYQwhQKx8_dR4AaABAg", "responsibility": "company", "reasoning": "contractualist", "policy": "regulate", "emotion": "approval"},
  {"id": "ytc_Ugz1-6J5vAmqJ44f5N14AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_Ugy5m14qSgi5QCjvFz54AaABAg", "responsibility": "user", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgzjJ9ArjsXCkqCouKB4AaABAg", "responsibility": "distributed", "reasoning": "contractualist", "policy": "regulate", "emotion": "approval"},
  {"id": "ytc_UgynhnANqQrtH6DilFl4AaABAg", "responsibility": "government", "reasoning": "unclear", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgwBUD1OJLSaad0uTQ14AaABAg", "responsibility": "distributed", "reasoning": "mixed", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_UgyG0fg61kWDQY1RFhJ4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "ban", "emotion": "fear"},
  {"id": "ytc_Ugy9nZsMdZJ1qidXvZx4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "outrage"},
  {"id": "ytc_UgxP9tHUWoQJsf6y5HZ4AaABAg", "responsibility": "government", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"}
]
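A raw batch response like the one above is easiest to check with a small validation pass before ingesting it. The sketch below is one possible approach, not part of the original pipeline; the allowed label sets are inferred only from the values that actually appear in this response and the coding result above, so the real codebooks may differ.

```python
import json

# Allowed labels per dimension. ASSUMPTION: these sets are inferred from the
# labels visible in this one response; the full codebook may contain more.
ALLOWED = {
    "responsibility": {"government", "company", "user", "ai_itself", "distributed", "none"},
    "reasoning": {"consequentialist", "contractualist", "mixed", "unclear"},
    "policy": {"regulate", "ban", "none", "unclear"},
    "emotion": {"fear", "outrage", "approval", "indifference", "mixed"},
}

def validate_codings(raw: str) -> list[dict]:
    """Parse a batch coding response and keep only records whose labels
    all fall within the allowed sets for each dimension."""
    records = json.loads(raw)
    valid = []
    for rec in records:
        if all(rec.get(dim) in labels for dim, labels in ALLOWED.items()):
            valid.append(rec)
    return valid
```

Records with an unknown or missing label are dropped rather than coerced, which keeps downstream counts honest; a stricter variant could raise instead, flagging the batch for manual review.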