Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I found this video deeply compelling and thought-provoking. It explored the profound question of whether artificial intelligence, if developed without wisdom, foresight, and ethical stewardship, could one day pose risks to humanity. Rather than presenting fear alone, the discussion encouraged reflection on responsibility—how the choices societies, researchers, and leaders make today will shape the trajectory of intelligent technologies tomorrow. What resonated most was the reminder that AI itself is not inherently destructive; it reflects the intentions, safeguards, and values embedded by those who design and deploy it. The video invited viewers to look beyond sensational narratives and instead consider the importance of governance, transparency, and human-centered innovation. In that sense, it was not simply a warning, but a call to cultivate thoughtful leadership and global collaboration so that advanced intelligence evolves as a force that protects, elevates, and benefits humanity rather than endangering it.
youtube · AI Governance · 2026-02-08T03:0…
Coding Result
Dimension       Value
Responsibility  distributed
Reasoning       mixed
Policy          regulate
Emotion         approval
Coded at        2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id":"ytc_Ugyo2iw7o0S2WL8X0qh4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgzyyI8n_MRqdwZcHrJ4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ugw5uIzAxyBDkdObF5F4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugx0MnUUSTlOUJNju5F4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgyySMvaAW0fZq9NPMJ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugzk3w52tTXw_jPGXUx4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugw7MMTlzHA2yi-fglV4AaABAg","responsibility":"distributed","reasoning":"contractualist","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgxTyMhxSYgCdQJPhcF4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"resignation"},
  {"id":"ytc_UgzkfuYCBa7sKmC5-M14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgyP7RIgnfpA1EPFLzd4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"regulate","emotion":"approval"}
]
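To inspect a single comment's coding from a raw response like the one above, the JSON array can be parsed and filtered by comment id. The sketch below is a minimal illustration, not part of the original tool: it uses a two-record excerpt copied verbatim from the response, and the `lookup` helper name is hypothetical.

```python
import json

# Excerpt of the raw LLM response (two of the ten records), copied verbatim.
# In practice you would load the full array from the tool's log.
raw = '''
[ {"id":"ytc_Ugzk3w52tTXw_jPGXUx4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgyP7RIgnfpA1EPFLzd4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"regulate","emotion":"approval"} ]
'''

records = json.loads(raw)

def lookup(records, comment_id):
    """Return the coded dimensions for one comment id, or None if absent."""
    return next((r for r in records if r["id"] == comment_id), None)

coded = lookup(records, "ytc_UgyP7RIgnfpA1EPFLzd4AaABAg")
print(coded["responsibility"], coded["policy"], coded["emotion"])
# → distributed regulate approval
```

Note that this second record matches the Coding Result table above (distributed / mixed / regulate / approval), which is one quick sanity check the raw view supports.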