Raw LLM Responses

Inspect the exact model output for each coded comment.

Comment
I think humanity will not be able to safeguard itself from AI until humanity can safeguard itself from humanity. So...humanity needs to program itself...humanity therefore, needs to follow a code....and that means humanity must write a code to follow...wouldn't it be nice if God was overseeing the writing of the code? Actually there is a way to have God oversee the writing of the code... This is called Kabbalah...when you open, humble, open to God... The Bible gives humanity the code to follow. When humanity understand, it can write the code. .. This is code... 56 28 27 A Wisdom Code 1 83 27 111 as way to be 20 49 35 7 AI Safeguard and 10 82 19 Safeguard on 82 29 Humanity. 111 Is this code 28 56 27 by coincidence? 27 84 When you 50 61 open to God... 50 35 26 You learn. 61 50 Only use 66 45 same number. 38 73 ( Always code 1 1 1 81 27 3 as way to be 20 49 35 7 111 making three. ) 55 56 Make lines of a. 30 59 21 1 sentence each add 85 17 9 111. and numbers 19 92 to be that as. 35 7 49 20 same number. 38 73 This code is 56 27 28 to be in logic, 35 7 23 46 and make sense. 10 30 62 Only a faith 67 1 44 being as Love 37v20 54 makes sense. 49 62 Be Biblical Love. 7 50 54 Make Love code. 30 54 27 Code Bible Love. 27 30 54 to be in logic 35 7 23 46 and make sense 19 30 62 in program. 23 88. This is code.... Follow a code. A wisdom code. Wisdom is for saving. This code is for saving HUMANITY. It reveals As way to be As not to be.
youtube AI Governance 2025-07-12T03:3…
Coding Result
Dimension        Value
Responsibility   user
Reasoning        deontological
Policy           none
Emotion          mixed
Coded at         2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id":"ytc_UgxPWK42D-YZMZdbGoR4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugw79vFlt1UM1mYta0l4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugza8aX8jlelUf_Scol4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugy-8mq0d26mLcggU8F4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"mixed"},
  {"id":"ytc_Ugw73f1ci1ZSb3Lixeh4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugxs8g3dObYvlIk5f_d4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgxzMPNZAwi74E6gI6Z4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
  {"id":"ytc_UgypgMehhX7uMknIPdh4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxmBzObGYWFxKxpf6R4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgxwlKoqTyFC4rXxHeJ4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]
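The raw response above is a JSON array of per-comment coding records, each carrying the four dimensions shown in the table (responsibility, reasoning, policy, emotion) plus a comment `id`. A minimal sketch of how such a response could be parsed and validated, assuming only the field names visible in this dump (the actual pipeline's schema and allowed-value sets are not documented here, so the validation rule is a hypothetical illustration):

```python
import json

# Two records copied verbatim from the raw LLM response above.
RAW = (
    '[{"id":"ytc_Ugy-8mq0d26mLcggU8F4AaABAg","responsibility":"user",'
    '"reasoning":"deontological","policy":"none","emotion":"mixed"},'
    '{"id":"ytc_UgxwlKoqTyFC4rXxHeJ4AaABAg","responsibility":"developer",'
    '"reasoning":"consequentialist","policy":"regulate","emotion":"fear"}]'
)

# The four coding dimensions that appear in every record of the dump.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")


def parse_codings(raw: str) -> dict[str, dict]:
    """Parse a raw LLM response into {comment_id: coding} entries,
    keeping only records that carry an id and all four dimensions."""
    records = json.loads(raw)
    return {
        r["id"]: {d: r[d] for d in DIMENSIONS}
        for r in records
        if "id" in r and all(d in r for d in DIMENSIONS)
    }


codings = parse_codings(RAW)
print(codings["ytc_Ugy-8mq0d26mLcggU8F4AaABAg"]["reasoning"])  # deontological
```

Indexing by comment `id` makes it straightforward to join a coding back onto the comment it describes, which is presumably how the Coding Result table above is populated.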