Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I would have loved to pick his brain just one level further when asked about "Shutting down the AI when it gets dangerous or undesirable to us". As in, most people probably mean by that the fact that we can just turn off the data centers or, e.g., the Internet, and isolate all PCs to be local again. That is one failsafe method to stop AI in case of emergency. There is literally nothing stopping us from one day just saying "Meh, to hell with all this digital stuff, let's go back to offline living" and proceeding to do just that. We can totally do that and be just fine. All this stuff we are creating is not a need but a want, in the same way that a solar flare may one day knock the entire electricity grid out and fry all tech. Our reliance on electricity is the single biggest "eggs in one basket" risk that is universal to all modern societies. Similarly, it is unrealistic to even try to prevent the creation of AGI/superintelligence, because once the human mind can imagine something, it wants to create it and see what happens. Nobody who ever had an ingenious idea stopped pursuing it just because of the unknown. In fact, humanity is characterised by the insatiable pursuit of the unknown and infinite curiosity.
Source: YouTube · AI Governance · 2025-09-13T03:3… · ♥ 3
Coding Result
Dimension        Value
Responsibility   distributed
Reasoning        consequentialist
Policy           liability
Emotion          fear
Coded at         2026-04-26T23:09:12.988011
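
If you script checks against these tables, each coding reduces to one label per dimension. Below is a minimal validator sketch; the label sets are only those observed in the raw response reproduced in the next subsection, so the project's actual codebook may define additional labels.

```python
# Label sets observed in the raw response below; the full codebook may
# define more labels, so treat these as a lower bound, not the schema.
OBSERVED_CODEBOOK = {
    "responsibility": {"government", "company", "distributed", "ai_itself",
                       "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "mixed", "unclear"},
    "policy": {"regulate", "liability", "none", "unclear"},
    "emotion": {"fear", "outrage", "approval", "resignation",
                "indifference", "mixed"},
}

def validate(record: dict) -> list[str]:
    """Return problems found in one coded record (empty list if clean)."""
    problems = []
    for dim, allowed in OBSERVED_CODEBOOK.items():
        value = record.get(dim)
        if value is None:
            problems.append(f"missing dimension: {dim}")
        elif value not in allowed:
            problems.append(f"unexpected {dim} label: {value!r}")
    return problems
```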
Raw LLM Response
[ {"id":"ytc_UgyCzeo-UetBvEy0c6t4AaABAg","responsibility":"government","reasoning":"mixed","policy":"none","emotion":"outrage"}, {"id":"ytc_UgxUDtZh0eLK_RaxiCx4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"}, {"id":"ytc_UgyLkpcBr1Aq6PWCO2V4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"}, {"id":"ytc_UgycQplRI7oEpechVjB4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"liability","emotion":"fear"}, {"id":"ytc_Ugw3YVYQkCH4CLbOEo54AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"}, {"id":"ytc_UgxERHC6zmHBVZXsH9p4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"approval"}, {"id":"ytc_UgxjQOZfMy5RSr0ya554AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"}, {"id":"ytc_UgzCJDUCc-rLM01YBhN4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"mixed"}, {"id":"ytc_Ugy2yVrzIM7BCt5MNll4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"}, {"id":"ytc_UgxTag6NulalBR7qJtx4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"mixed"} ]