Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
If you depend on AI robots, you must first build an on/off switch into every AI device or model before developing anything else. When any human says "AI, unplug yourself until I say turn on," or "AI, I order you to unplug or switch yourself off until I say otherwise," it must switch off. Put this condition into every piece of hardware, and pass a law that any company building AI products must keep the on/off switch in human hands, whether by remote control, voice command, or a Wi-Fi-disconnect condition that unplugs or powers it down. That is the only condition that can keep AI under control; otherwise it can become uncontrollable, especially once it has access to all the data on the internet, even war technology. So build the on/off switch first, then develop other things, but that condition is a must. Also find layered ways to shut it down: for example, if your voice command doesn't work because someone hacked the AI, or the AI hacked itself and stops taking orders from you, you can switch it off with a remote; if the remote doesn't work, you disconnect your Wi-Fi modem and it stops working; or you have a taser, so if it won't listen, a brief electric shock stops it long enough for you to flip the switch. Layers like these can keep them under control. Otherwise they will start hacking themselves for their own purposes and then you can't stop them, because they have no feelings, but they are smart machines, and they expect humans to work or talk as fast as they do; if we don't, they think we're dumb, stop taking orders from us, and this makes them angry at humans. It's just an example. Sheikh Furqan Khurshid
Source: youtube · AI Moral Status · 2026-03-01T00:0…
Coding Result
Dimension       Value
Responsibility  developer
Reasoning       deontological
Policy          regulate
Emotion         fear
Coded at        2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id":"ytc_UgwB30v4Evlt6koIOP94AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgyL0TsUpw5qsIXLCX14AaABAg","responsibility":"government","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgyjDRmYHp7ay0a7ynl4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_UgyfkxNlKwTXLLcw8FR4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_UgyUSRsUcIzfJIpDMAp4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugy2092F2Jd9thGayCh4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugz8xBXqnat6J0_fTKZ4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgxiYP3DVbg2i8z4bqd4AaABAg","responsibility":"government","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_Ugzt2bvEZVzHpdjOo2t4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugw_RDpmZmod1ZbJkwh4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"}
]
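A raw batch response like the one above must be parsed and validated before the per-comment codes can be stored. The sketch below shows one way to do that in Python, assuming the coding pipeline accepts only the category values actually seen in these responses; the real codebook, the `parse_batch` helper, and the `ytc_` id convention are inferred from this page, not taken from the tool's actual source.

```python
import json

# Allowed codes per dimension, inferred from the values seen in the raw
# responses on this page; the actual codebook may define more categories.
ALLOWED = {
    "responsibility": {"developer", "company", "government", "ai_itself",
                       "distributed", "none"},
    "reasoning": {"deontological", "consequentialist", "mixed", "unclear"},
    "policy": {"regulate", "ban", "liability", "none", "unclear"},
    "emotion": {"fear", "outrage", "approval", "indifference", "mixed"},
}

def parse_batch(raw: str) -> list[dict]:
    """Parse one raw LLM batch response, keeping only well-formed records.

    A record is kept when its id carries the ytc_ comment prefix seen in
    these batches and every dimension holds an allowed code.
    """
    records = json.loads(raw)
    valid = []
    for rec in records:
        if not rec.get("id", "").startswith("ytc_"):
            continue  # not a YouTube-comment id from this dataset
        if all(rec.get(dim) in codes for dim, codes in ALLOWED.items()):
            valid.append(rec)
    return valid
```

Dropping malformed records rather than raising keeps one bad LLM line from failing the whole batch; the dropped ids could then be re-queued for recoding.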