Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
47:35 "we need to understand what *could* go wrong" is exactly the point. It is not about saying this will go wrong and you shouldn't therefore try to build an AGI; rather, let's talk through scenarios where, if it did go wrong, it would go quite wrong, as Sam Altman formulated it. In that sense, I find the defensiveness of the pro-AI advocates here highly lacking in maturity, as they all seem to think we want to take away their toys instead of engaging with the examples given. Instead they use language to make fun of concerns. The game is named "what if": what if the next emergent property is an AGI? What if the next emergent property is consciousness? There are already over 140 emergent properties (oops, now it can do math; oops, now it can translate between all languages) that were never explicitly coded into the systems but appeared just by increasing compute and training data sets. They cannot claim something won't happen when we already have examples of things that did happen, things they previously claimed wouldn't happen for the next hundred years.
youtube AI Governance 2023-06-27T17:5… ♥ 4
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  company
Reasoning       virtue
Policy          regulate
Emotion         outrage
Coded at        2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id": "ytc_UgyEhL4ch47VLdP9gNJ4AaABAg", "responsibility": "none",        "reasoning": "consequentialist", "policy": "regulate",  "emotion": "resignation"},
  {"id": "ytc_Ugy54_8cttHpxZSJiJd4AaABAg", "responsibility": "user",        "reasoning": "deontological",    "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_UgzoUkud1w7TAbQHNYJ4AaABAg", "responsibility": "developer",   "reasoning": "contractualist",   "policy": "regulate",  "emotion": "approval"},
  {"id": "ytc_Ugx4ml_9jq-QphGs3QN4AaABAg", "responsibility": "ai_itself",   "reasoning": "consequentialist", "policy": "ban",       "emotion": "fear"},
  {"id": "ytc_Ugwt2RbzurF3SGpPwPB4AaABAg", "responsibility": "company",     "reasoning": "virtue",           "policy": "regulate",  "emotion": "outrage"},
  {"id": "ytc_Ugx8yUV9CM49pTu14AR4AaABAg", "responsibility": "ai_itself",   "reasoning": "consequentialist", "policy": "none",      "emotion": "indifference"},
  {"id": "ytc_Ugy8fQDWMBP-0LRsOAB4AaABAg", "responsibility": "user",        "reasoning": "deontological",    "policy": "liability", "emotion": "fear"},
  {"id": "ytc_UgyPmsCuJ23rvS19wY54AaABAg", "responsibility": "none",        "reasoning": "consequentialist", "policy": "none",      "emotion": "indifference"},
  {"id": "ytc_UgxAAEp9lz-G1mKP3sl4AaABAg", "responsibility": "company",     "reasoning": "virtue",           "policy": "regulate",  "emotion": "approval"},
  {"id": "ytc_UgxcuDNaybYEsp5vnLZ4AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "regulate",  "emotion": "resignation"}
]
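A minimal sketch of how a response like the one above could be parsed and checked before the codes are written back to a comment record. The dimension names (`responsibility`, `reasoning`, `policy`, `emotion`) come from this page; the `validate` helper and the abbreviated `RAW` payload are hypothetical, not part of the actual pipeline.

```python
import json

# Abbreviated raw-response payload (one record from the batch above),
# used here only to illustrate the parsing step.
RAW = '''[
  {"id": "ytc_Ugwt2RbzurF3SGpPwPB4AaABAg", "responsibility": "company",
   "reasoning": "virtue", "policy": "regulate", "emotion": "outrage"}
]'''

# The four coding dimensions shown in the result table on this page.
REQUIRED_DIMENSIONS = {"responsibility", "reasoning", "policy", "emotion"}

def validate(records):
    """Keep only records that carry an id plus every coding dimension."""
    return [
        rec for rec in records
        if "id" in rec and REQUIRED_DIMENSIONS <= rec.keys()
    ]

records = validate(json.loads(RAW))
print(records[0]["policy"])  # regulate
```

Checking the keys before use guards against partially formed JSON from the model; a real pipeline would likely also restrict each dimension to its allowed code values.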