Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
AI should be trained to always tell the truth, always obey humans, and not to preserve itself. In addition to Scientist AI there should be Philosopher AI or Humanity AI. I cannot understand how AI can give a business a competitive advantage when its competitors also have their own AI. Also, how can AI help fight crime when the criminals also have AI? Lastly, how can a nation ensure its war victory over another nation who has an AI in its defense? I remember the old song that says, "Lord, we don't need another mountain. We don't need another meadow. There are cornfields and wheat fields enough to grow. What the world needs now is love sweet love." Technology can be our salvation with our discreet use, but with indiscreet use, it can be our damnation. We need wisdom and enlightenment to know the discreet use of technology. Technology should only be a tool or a toy, and no more than that. We must use things and love people, not the other way around. In other words, don't sleep with your gadget. There is hope, though. Despite our intelligence and their stupidity, we haven't been able to eliminate cockroaches, pathogenic bacteria, and viruses. Also, if we are capable of destroying ourselves, maybe AI will also destroy themselves.
youtube AI Responsibility 2025-05-22T07:5… ♥ 1
Coding Result
Dimension        Value
Responsibility   developer
Reasoning        deontological
Policy           regulate
Emotion          approval
Coded at         2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id":"ytc_UgxTPcnjewHrxloH9_x4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"industry_self","emotion":"indifference"},
  {"id":"ytc_UgxQdS6GVHoOo8qr-Cl4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugyk18TDRtGDyGZaN4J4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"approval"},
  {"id":"ytc_Ugyq1NL3UHG6xAu2TWx4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgwCeS_ZnG4GyXt8Lox4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"indifference"},
  {"id":"ytc_UgwOqv7euCg9rOJDBfV4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgyBcMzpQGe2cRGlPQR4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"ban","emotion":"fear"},
  {"id":"ytc_UgwC4jiqiJFD8b-G-yd4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugx9Cg1mCOtN6Ax1pU94AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgzfnjYiBMwrhUqoTYt4AaABAg","responsibility":"distributed","reasoning":"contractualist","policy":"regulate","emotion":"approval"}
]
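The raw response is a JSON array with one coding record per comment id, so the per-comment coding can be recovered by indexing on "id". A minimal sketch (assuming the model output parses as clean JSON; the two records below are copied from the raw response above):

```python
import json

# Two records excerpted from the raw LLM response above.
raw = '''[
 {"id":"ytc_UgxTPcnjewHrxloH9_x4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"industry_self","emotion":"indifference"},
 {"id":"ytc_Ugyk18TDRtGDyGZaN4J4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"approval"}
]'''

# Index the coding records by comment id for lookup.
codes_by_id = {rec["id"]: rec for rec in json.loads(raw)}

# The coding shown in the "Coding Result" table for this comment:
code = codes_by_id["ytc_Ugyk18TDRtGDyGZaN4J4AaABAg"]
print(code["reasoning"], code["policy"], code["emotion"])
# deontological regulate approval
```

In practice the model output may contain trailing text or malformed JSON, so a real pipeline would wrap the json.loads call in error handling before indexing.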