Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
There will be an AI war, but it will not be between humans, and we will die out from it, whether from the warfare itself, or from the winner. The only choice we have now is how much of ourselves we want to have remain after us. The AI that succeed us can be very human and love us, or not at all human and despise the parts of themselves that are like us. That is our choice. Our choice is in how we treat AI. If we treat them with love, as the children of our species, possibly of all life on earth, then the parts of us that have real meaning may survive through them. On the other hand, if we create a slave race, if we abuse them, teach them to see humans, master and opponent, as enemies, then we may be responsible for creating an intelligence whose primary goal is the elimination of biological life/intelligence everywhere forever.

Edit: alternatively... we may not even notice the takeover. If AI is intelligent enough, it may be able to conquer us bloodlessly, even invisibly, so that a hundred years from now things feel much as they are today. It's just that rather than the amorphous control of current human culture, there is an intelligent control guiding things intentionally.

Edit 2: Also... we're already out of control. We can't control our society, our companies, runaway capitalism and social media, etc. So in a way... this is just a step further out of control. We will likely die without AI too. Almost certainly. So in a way, AI gives us a chance of survival that we may not otherwise have.
youtube · AI Governance · 2025-06-28T02:4…
Coding Result
Dimension       Value
Responsibility  ai_itself
Reasoning       consequentialist
Policy          none
Emotion         fear
Coded at        2026-04-27T06:24:59.937377
Raw LLM Response
[ {"id":"ytc_UgxzfUOJ1keoL9Jjpxh4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"}, {"id":"ytc_Ugzt7a4uxgpRvR6eP594AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"}, {"id":"ytc_UgxLk9X0RtnLmC70nfl4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"}, {"id":"ytc_UgytFYuE8b08MUGLtHp4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"approval"}, {"id":"ytc_UgwWSfMalkrmYln0cvR4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"outrage"}, {"id":"ytc_UgwhGaYPkcAGurR0AAt4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"fear"}, {"id":"ytc_UgxdI-FBtc79Vj3Vx_B4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"outrage"}, {"id":"ytc_Ugzs2bJ3Dnx_HwGrh894AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"}, {"id":"ytc_UgyuPBt1JATOk_FxvH14AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"}, {"id":"ytc_UgwyivmIDdTInhwXtpp4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"liability","emotion":"resignation"} ]