Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
On AI Safety. Steven, sir, your question is poignant: "Superintelligence is already what we strive for from the day we are born." We call it education, wisdom, and enlightenment. We admire it in our heroes, scientists, and spiritual leaders. The goal is not to avoid intelligence, but to align it with human values. The entire field of AI safety is dedicated to this alignment. I view it as "outsourcing our striving for intelligence," that's a beautiful concept. The challenge is a technical one (how to align), not a philosophical one (whether intelligence is good).

AI Playing God & Religion: Many people of faith see AI as an instrument of creation, a tool given to humanity to use wisely—much like any other gift. It can be viewed as a way to alleviate suffering, create abundance, and free up human time for higher purposes: community, family, art, worship, and contemplation. If one believes humanity is made in the image of a creator, then our impulse to create and improve the world is a reflection of that. Using AI responsibly can be seen as an act of stewardship, not blasphemy or risk.

Old Model: Value scarcity. Hoard resources. Charge more. Work more hours. Compete.

New Model: Value abundance. Share resources. Lower prices to gain more volume and community trust. Reduce work hours to increase human well-being and creativity. Collaborate.

THIS IS AI SAFETY IN A BOX. ANYTHING ELSE IS ALREADY DANGEROUS.

In this new world, a company's worth isn't in its proprietary data or its army of laborers (this is the current risk and danger to society- not AI), but in the health and engagement of its community and the uniqueness of its human-driven vision.
youtube AI Governance 2025-09-06T03:4…
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  developer
Reasoning       contractualist
Policy          regulate
Emotion         approval
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_Ugyfi_vIfD5HJDCfyCB4AaABAg", "responsibility": "ai_itself", "reasoning": "mixed", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgyqV0OvTNM3wGNr2694AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgzUp2CR1zwnfKjOY294AaABAg", "responsibility": "government", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgziwU0Yd7Bw39vMPyx4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgyjfjwTdPr3e7CFFTx4AaABAg", "responsibility": "government", "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_Ugw-Sf2cvRzTvIFcdsp4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugz_QCgfiFOIYiRhvWN4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgyeCxNZn3-HRX0mNBt4AaABAg", "responsibility": "ai_itself", "reasoning": "mixed", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgxxylKtLDXXwrh8TFB4AaABAg", "responsibility": "user", "reasoning": "consequentialist", "policy": "liability", "emotion": "mixed"},
  {"id": "ytc_Ugyg_81S1Gt34F4KoUt4AaABAg", "responsibility": "developer", "reasoning": "contractualist", "policy": "regulate", "emotion": "approval"}
]
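When inspecting a raw response like the one above, it helps to parse it and check that every code falls inside the expected categories. The sketch below is a minimal, hypothetical validator: the dimension names come from the response shown, but the allowed-value sets are inferred only from the values that appear in this batch — the actual codebook may define more categories, and the function name `parse_coded_comments` is illustrative, not part of the pipeline.

```python
import json

# Allowed values per coding dimension, inferred from the responses shown
# above; the real codebook may include additional categories.
ALLOWED = {
    "responsibility": {"developer", "government", "user", "ai_itself", "none"},
    "reasoning": {"consequentialist", "deontological", "contractualist", "mixed"},
    "policy": {"regulate", "liability", "none", "unclear"},
    "emotion": {"fear", "approval", "outrage", "indifference", "resignation", "mixed"},
}

def parse_coded_comments(raw: str) -> list[dict]:
    """Parse a raw LLM response (a JSON array of per-comment codes)
    and reject any value outside the expected categories."""
    records = json.loads(raw)
    for rec in records:
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{rec.get('id')}: unexpected {dim}={rec.get(dim)!r}")
    return records

# Example: the last entry from the batch above.
raw = ('[{"id":"ytc_Ugyg_81S1Gt34F4KoUt4AaABAg","responsibility":"developer",'
       '"reasoning":"contractualist","policy":"regulate","emotion":"approval"}]')
codes = parse_coded_comments(raw)
print(codes[0]["policy"])  # regulate
```

Validating before storing means a malformed or off-codebook model response fails loudly at ingestion rather than silently corrupting the coded dataset.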