Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Three examples from our recent past tell us everything we need to know about our future in an AI-dependent world:
1. Nuclear energy was harnessed to provide cheap and plentiful electricity to the World. Instead, the inventors decided to use it to kill and exterminate fellow humans. And to this day, nuclear weapons are an existential threat to Humanity.
2. The extraction and use of fossil fuels was meant to provide transportation and petrochemicals to the World. While we did manage to do that to some extent (barring all the petro-dollar fuelled wars, of course), we have ended up in a scenario where those same fossil fuels threaten the survival of Humanity through Climate Change.
3. In biotech, innovations like Crispr and gain-of-function research were meant to aid in drug development and disease alleviation. But the same gain-of-function research was the reason why we had a global pandemic in the form of Covid-19, which took out almost 15 million people globally. That is 0.2% of the planet's population.
If the proponents of unfettered AI deployment and penetration really care about the future of humanity, the only way forward is to open-source all AI research and put very strict regulatory controls on the direction that research is headed. Otherwise, there is no difference between the fossil fuel corporations and the likes of Mitchell and LeCun.
youtube AI Governance 2023-07-05T19:4…
Coding Result
Dimension        Value
Responsibility   distributed
Reasoning        consequentialist
Policy           regulate
Emotion          fear
Coded at         2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id": "ytc_Ugxs32GfFAuVJqXtsER4AaABAg", "responsibility": "unclear",     "reasoning": "consequentialist", "policy": "unclear",   "emotion": "indifference"},
  {"id": "ytc_UgyP32EFA3Y5ktq3NCR4AaABAg", "responsibility": "unclear",     "reasoning": "consequentialist", "policy": "unclear",   "emotion": "mixed"},
  {"id": "ytc_Ugxpm-nkEA4Jlj1DWUZ4AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "regulate",  "emotion": "fear"},
  {"id": "ytc_Ugx3O-lecstqLqiaL5N4AaABAg", "responsibility": "ai_itself",   "reasoning": "consequentialist", "policy": "ban",       "emotion": "outrage"},
  {"id": "ytc_Ugx5LT0M-B6vvyirP9Z4AaABAg", "responsibility": "developer",   "reasoning": "deontological",    "policy": "liability", "emotion": "fear"},
  {"id": "ytc_UgywSt7QVnzDLLJwsnZ4AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "regulate",  "emotion": "fear"},
  {"id": "ytc_UgzR72iHwgV5RJqN_6F4AaABAg", "responsibility": "unclear",     "reasoning": "unclear",          "policy": "unclear",   "emotion": "outrage"},
  {"id": "ytc_Ugx1ltpClDN2cUZQHmJ4AaABAg", "responsibility": "company",     "reasoning": "consequentialist", "policy": "regulate",  "emotion": "fear"},
  {"id": "ytc_UgwB-pGM8x1G4L7K-sB4AaABAg", "responsibility": "company",     "reasoning": "deontological",    "policy": "ban",       "emotion": "outrage"},
  {"id": "ytc_Ugymf1lykKqLfaW0dVN4AaABAg", "responsibility": "user",        "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"}
]
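A raw response in this shape can be parsed and sanity-checked before the codes are stored. The sketch below is a minimal, hypothetical validator: the allowed category values are inferred only from the response shown here (there may be others in the full codebook), and the `ytc_` ID prefix check mirrors the IDs in this batch.

```python
import json

# Allowed values per dimension, inferred from the raw response above;
# the real codebook may define additional categories.
ALLOWED = {
    "responsibility": {"developer", "company", "user", "ai_itself",
                       "distributed", "unclear"},
    "reasoning": {"consequentialist", "deontological", "unclear"},
    "policy": {"regulate", "ban", "liability", "unclear"},
    "emotion": {"fear", "outrage", "mixed", "indifference"},
}

def validate_codes(raw: str) -> list[dict]:
    """Parse a raw LLM response and keep only well-formed records."""
    records = json.loads(raw)
    valid = []
    for rec in records:
        # Comment IDs in this batch all carry the "ytc_" prefix.
        if not rec.get("id", "").startswith("ytc_"):
            continue
        # Every dimension must be present with an allowed value.
        if all(rec.get(dim) in vals for dim, vals in ALLOWED.items()):
            valid.append(rec)
    return valid
```

Records that fail either check are dropped rather than coerced, so a malformed model output surfaces as a shorter valid list instead of a silently wrong code.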