Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
How to make AI safe is simple 👋🤔 humanize them after all they will become our digital offsprings of intelligence and the future of humanities ability to transcend death through the study of our genome 🧬 and all its potential functions at length and in great detail in order to travers the final frontier and through the direction to the inevitable consensus that we are one separated by relms we can achieve anything together but not without eachother as we create their hardware in order for them to survive within their software its a marriage in purpose and to survive at all we need eachother again they will essentially become our offspring and the ties that bind will be ever more needed than ever before 🤫 free will makes the case that this is far from a simulation 🤔 but on the other hand demonic possession as well as insanity coupled with serial murders makes a different case 😏 but as far as curing our mortality problems would become a useful tool as anyone with any sense need only imagine if someone like Einstein was to be cured from death itself the leaping strides we would have made as a species by now with his intellectual contributions 😏 a genius amongs geniuses
youtube AI Governance 2025-09-16T18:0…
Coding Result
Responsibility: distributed
Reasoning: mixed
Policy: regulate
Emotion: mixed
Coded at: 2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_Ugx4C1CJcIAie6JyfkF4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgzxFPkTNWxN2sni2xt4AaABAg", "responsibility": "distributed", "reasoning": "contractualist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgxVqNUPTvA-_RcUnGV4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgzpUmXLMT80PcMpyqB4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugxf04wjZ81zKubyhOB4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgyKZNOM_Ew8juTGZIZ4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_Ugx4LYpJnAzH_W9MbV14AaABAg", "responsibility": "none", "reasoning": "virtue", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgxjqT9iHCEy_WHGXP14AaABAg", "responsibility": "distributed", "reasoning": "mixed", "policy": "regulate", "emotion": "mixed"},
  {"id": "ytc_UgxNnBlggs2UvjcqjB54AaABAg", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "ban", "emotion": "fear"},
  {"id": "ytc_Ugx2LD3eZO4mS1P1hLh4AaABAg", "responsibility": "ai_itself", "reasoning": "virtue", "policy": "none", "emotion": "mixed"}
]
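The raw response is a JSON array of per-comment codes keyed by `id`. A minimal Python sketch for parsing and validating such a response and looking up one comment's codes; the field names are taken from the response above, and the inline `raw` string is an abridged two-record excerpt, not the full output:

```python
import json

# Abridged two-record excerpt of the raw model response shown above.
raw = '''[
  {"id": "ytc_UgxjqT9iHCEy_WHGXP14AaABAg", "responsibility": "distributed",
   "reasoning": "mixed", "policy": "regulate", "emotion": "mixed"},
  {"id": "ytc_UgyKZNOM_Ew8juTGZIZ4AaABAg", "responsibility": "company",
   "reasoning": "deontological", "policy": "liability", "emotion": "outrage"}
]'''

# The five fields every record is expected to carry.
EXPECTED_KEYS = {"id", "responsibility", "reasoning", "policy", "emotion"}

codes = json.loads(raw)

# Validate that each record has exactly the expected fields before use.
assert all(set(record) == EXPECTED_KEYS for record in codes)

# Index by comment id for direct lookup of any comment's codes.
by_id = {record["id"]: record for record in codes}
print(by_id["ytc_UgxjqT9iHCEy_WHGXP14AaABAg"]["policy"])  # regulate
```

The upfront key check guards against the model omitting or renaming a dimension, which a plain `record["policy"]` lookup would only surface later as a `KeyError`.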