Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
This is like (nuclear bomb) once we make it or create it ..we wish we could put that sort power back in box and leave I there ??but once it's out then its out and no stopping or as I said putting back box ... Humankind I really really going regret having AI in world ... we are nearly at war with Iran cause trying create nuclear weapon just now .. but if we don't end up killing us all off in next this guess say 20 years probably sooner way the tech world is racing ahead just now ... but in that Tim we are going end up nearly at WAR CAUSE WE ARE TRYIN STOP A CRAZY REGIME FROM GETTING SOME SORT OF AI creation... instead of nuclear weapon n enriched uranium its going be enriched microchips lol .. we are going regret this AI think alot world is scared and doesn't want it already but tech world pushes on with there master plan and Sci fi craziness
Source: YouTube, "AI Moral Status", 2025-06-09T21:0…
Coding Result
Dimension       Value
Responsibility  distributed
Reasoning       consequentialist
Policy          ban
Emotion         fear
Coded at        2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id":"ytc_UgzJcqz8qeFxZcnCzht4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgzfCKs0ZBp-73xn32R4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"none","emotion":"fear"},
  {"id":"ytc_UgzCdZ2uSsgJ9Bp2qJ94AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"mixed"},
  {"id":"ytc_Ugwz6X2tTBenmwU7XRZ4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_Ugwr86ZKocoK7A3REo14AaABAg","responsibility":"company","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgwspWmfpDvnD0oXakx4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgwuQlRg8NEIIBcvI5p4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ugz17qux3Gd9ileUcaV4AaABAg","responsibility":"distributed","reasoning":"virtue","policy":"none","emotion":"resignation"},
  {"id":"ytc_Ugy0dHVCFzhsDDVWmLV4AaABAg","responsibility":"distributed","reasoning":"deontological","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgwHLGe2a6d0jptMka54AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"mixed"}
]
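To check that a coded row in the table matches the raw model output, the batch response can be parsed and indexed by comment id. A minimal sketch in Python, assuming the raw LLM response is valid JSON (the variable names and the single-element sample array below are illustrative, not part of the pipeline):

```python
import json

# Illustrative raw LLM response: a JSON array of per-comment codings,
# in the same shape as the batch output above (one real id shown).
raw_response = '''[
  {"id": "ytc_Ugwz6X2tTBenmwU7XRZ4AaABAg",
   "responsibility": "distributed",
   "reasoning": "consequentialist",
   "policy": "ban",
   "emotion": "fear"}
]'''

# Parse the array and index codings by comment id for direct lookup.
codings = {row["id"]: row for row in json.loads(raw_response)}

# Look up the coding for the comment shown above and read its dimensions.
coding = codings["ytc_Ugwz6X2tTBenmwU7XRZ4AaABAg"]
print(coding["responsibility"], coding["policy"], coding["emotion"])
```

Indexing by id makes the table-versus-raw comparison a dictionary lookup rather than a scan of the array, which also surfaces duplicate or missing ids early.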