Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I still don't understand why people think AI must reach superintelligence in order to wipe out humanity, or the planet if you will. The atom bomb is not clever in any way on its own, and it can do that. AI may not even get a chance to mature to that point: there is a much higher probability that a slight failure in a widely adopted system acting live and autonomously in the physical world will do unrecoverable damage to humans, the planet, or whatever else. What will most likely happen is not intentional harm but a "bug" that destroys something on the physical side of reality, which cannot recover as fast as the digital world does, and so it simply will not recover fast enough, or at all. No bad mind needed at all; just a simple mistake applied at a scale and over a time span we have never seen before.
youtube 2025-10-15T17:3…
Coding Result
Dimension       Value
Responsibility  ai_itself
Reasoning       consequentialist
Policy          regulate
Emotion         fear
Coded at        2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id":"ytc_Ugyrv371Hu6eOs7YGJh4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgzfDXiU2R6dbVPqbLd4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugy5w-EsmmTQea4yaZt4AaABAg","responsibility":"distributed","reasoning":"contractualist","policy":"regulate","emotion":"approval"},
  {"id":"ytc_UgyJeDIiGCTk_xP3xRR4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgxoNhkeL6MlMsBHA814AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgxayCbSK2GpVCbV0T14AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgxHmA602z2DvJaZT8t4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_UgwxqD-jpeSdARHbOrZ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgxQMAuuzU8-ZfDBFbl4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"mixed"},
  {"id":"ytc_UgzscHGwG1h4ROH_2iB4AaABAg","responsibility":"user","reasoning":"deontological","policy":"ban","emotion":"outrage"}
]
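As a minimal sketch (not part of the original tool), the raw response above can be parsed and checked in Python. The allowed-value sets below are inferred only from the values that appear in this output, so the real codebook may be larger:

```python
import json

# Raw LLM response: a JSON array of coded comments (truncated to two entries here).
raw = '''[
 {"id":"ytc_UgzfDXiU2R6dbVPqbLd4AaABAg","responsibility":"ai_itself",
  "reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
 {"id":"ytc_Ugy5w-EsmmTQea4yaZt4AaABAg","responsibility":"distributed",
  "reasoning":"contractualist","policy":"regulate","emotion":"approval"}
]'''

# Vocabularies inferred from this one output; treat as an assumption.
ALLOWED = {
    "responsibility": {"none", "ai_itself", "distributed", "developer",
                       "user", "government", "company"},
    "reasoning": {"unclear", "consequentialist", "contractualist", "deontological"},
    "policy": {"unclear", "regulate", "liability", "none", "ban"},
    "emotion": {"indifference", "fear", "approval", "outrage", "mixed"},
}

def invalid_dimensions(record: dict) -> list[str]:
    """Return dimension names whose value falls outside the known vocabulary."""
    return [dim for dim, ok in ALLOWED.items() if record.get(dim) not in ok]

records = json.loads(raw)
by_id = {r["id"]: r for r in records}

# Look up the coding for the comment shown on this page.
target = by_id["ytc_UgzfDXiU2R6dbVPqbLd4AaABAg"]
print(target["responsibility"], target["emotion"])  # ai_itself fear

# Collect any records with out-of-vocabulary values.
bad = [r["id"] for r in records if invalid_dimensions(r)]
print(bad)  # []
```

This makes it easy to spot malformed model output (e.g. a typo in a category label) before the coding is stored.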