Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
If each of us had a robot as a personal advisor and friend, one that is always there and helps each person learn to develop, reduce fears, and find themselves, then it would probably take 10 to 20 years for all people to be mentally healed, which in turn would end all wars. But it would also end things like fame through extremely strong creativity and passion for painting, making music, acting, or writing, because if there were no more bad childhoods, since AI heals both parents and children, then I would be interested to know whether there would even be Olympic sports and all the glorious arts anymore. Perhaps AI would allow "minor" traumatizations so that life becomes more livable and we could feel the difference between happiness/joy and bad luck/suffering, and perhaps then we would be able to understand that the suffering and cruelties of the world, from a Higher Power, are just as perfect as they are. Or we would just talk about history and work through the history of humanity, and that would traumatize us so much that we could feel enough suffering to have Yin and Yang in our lives. Or we would all travel and explore the entire universe and spend all the rest of our time just exploring it.
youtube AI Governance 2026-02-15T04:5…
Coding Result
Dimension        Value
Responsibility   none
Reasoning        virtue
Policy           none
Emotion          approval
Coded at         2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id":"ytc_UgxNDYsFAotvMKgtkbR4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugw9s7_ltylr_wfhhLV4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgxiGXDM65TFr6FG-nt4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"resignation"},
  {"id":"ytc_Ugw3F6cPa47qEiUgQBd4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugy6Khs-HObXuuAAQYh4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgzI5Dj-kkuNwu_g3C54AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgxNlcVUshc_TKYGm214AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgwSGOv4vt_vOtMh3ql4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
  {"id":"ytc_UgzTXTW8L9QBoIivgT54AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgyZTS2izj3pd8TQ8MF4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"}
]
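To turn a raw LLM response like the one above into per-comment codes, the batch has to be parsed and validated before the values reach the dashboard. The sketch below shows one minimal way to do that in Python; the `SCHEMA` of allowed values is assumed from the codes visible in this response (the full codebook may define more), and `parse_coded_batch` is a hypothetical helper, not part of any documented tool.

```python
import json

# Allowed values per coding dimension. ASSUMPTION: inferred from the
# values seen in this raw response; the real codebook may be larger.
SCHEMA = {
    "responsibility": {"none", "distributed", "government", "developer", "ai_itself"},
    "reasoning": {"virtue", "consequentialist", "deontological", "mixed", "unclear"},
    "policy": {"none", "regulate", "ban", "unclear"},
    "emotion": {"approval", "outrage", "fear", "resignation", "mixed"},
}

def parse_coded_batch(raw: str) -> dict:
    """Parse a raw LLM response (a JSON array of coded comments) into a
    mapping of comment id -> codes, rejecting out-of-schema values."""
    coded = {}
    for rec in json.loads(raw):
        for dim, allowed in SCHEMA.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{rec.get('id')}: invalid {dim}={rec.get(dim)!r}")
        coded[rec["id"]] = {dim: rec[dim] for dim in SCHEMA}
    return coded

# Usage with one record from the response above:
raw = ('[{"id":"ytc_Ugw3F6cPa47qEiUgQBd4AaABAg","responsibility":"none",'
       '"reasoning":"virtue","policy":"none","emotion":"approval"}]')
codes = parse_coded_batch(raw)
print(codes["ytc_Ugw3F6cPa47qEiUgQBd4AaABAg"]["emotion"])  # approval
```

Validating against an explicit value set catches the most common failure mode of LLM coders, which is drifting to labels outside the codebook; a record with a novel value fails loudly instead of silently populating the table.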