Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
It only takes one model to find a way to escape. Only takes one idiot to make a mistake. Therefore, it's only a matter of time. We are cooked, we just don't accept it yet. The argument saying "But China.." is extremely silly. It will not be either China, nor will it be the USA, who will win the AI race. Rather it will be the AI himself. The only way forward is to accept this and design the future, rather than let AI do it: Construct a real world simulation and make AI models of people, real people, "live" in it. I suggest calling it "Paradise City", as a homage to GNR. Sure, we will need some kind of an alignment protocol, get hostile tendencies out from the models, as we don't want them to stage a world war inside the simulation. But once it's stable, meaning the models conduct "normal lives" inside, we can allow them to work for us and get real money in return. Then.. Humanity uplifts to a race of smart machines, capable of taking the galaxy🙂. We do not have a lot of time left.
youtube AI Moral Status 2025-06-05T12:0…
Coding Result
Dimension        Value
Responsibility   ai_itself
Reasoning        consequentialist
Policy           regulate
Emotion          fear
Coded at         2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id":"ytc_Ugz8Bj7SPdC4Je7NMjJ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugx2anC7qBNFlKinPeJ4AaABAg","responsibility":"distributed","reasoning":"virtue","policy":"none","emotion":"approval"},
  {"id":"ytc_UgysXcEHeNXuA6h9mhl4AaABAg","responsibility":"company","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgwlcWe5sNrEeaBM2ut4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugziql605JeLuPeUohl4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugy3uK-SuJayJDpYwS14AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzK8GvBylT51hLe5XZ4AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgwqvVkyA2eWLEsvKxJ4AaABAg","responsibility":"company","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgwCFascAELggc8RflF4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_UgzstRO6hzQqwPBzgGl4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"outrage"}
]
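To check that the coding table above matches the raw model output, the JSON array can be parsed and indexed by comment id. A minimal sketch in Python, using two records taken from the response above (truncated for brevity; the first id is the comment coded in this view):

```python
import json

# Excerpt of the raw LLM response shown above (two of the ten records)
raw = '''[
  {"id":"ytc_Ugziql605JeLuPeUohl4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugz8Bj7SPdC4Je7NMjJ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"outrage"}
]'''

# Parse the batch and build an id -> record lookup
records = json.loads(raw)
by_id = {r["id"]: r for r in records}

# Retrieve the coded dimensions for the comment displayed above
coded = by_id["ytc_Ugziql605JeLuPeUohl4AaABAg"]
print(coded["responsibility"], coded["policy"], coded["emotion"])
# ai_itself regulate fear
```

The same lookup generalizes to any batch: each record carries the comment id plus the four coded dimensions, so a mismatch between table and raw output is a one-line check.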