Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
The risks associated with artificial intelligence are real and increasingly acknowledged by researchers, governments, and industry leaders. Modern AI systems are designed to learn from data and improve over time, which introduces both significant benefits and non-trivial risks if not properly governed. History suggests that once a transformative technology emerges, it is rarely abandoned. Nuclear weapons are a clear example: despite widespread recognition of their destructive potential, no nuclear-armed state has voluntarily relinquished them entirely. This precedent indicates that advanced AI development is unlikely to be halted, regardless of its risks. AI will undoubtedly deliver major advantages, improvements in productivity, healthcare, and scientific discovery, but it will also create disruptions, including job displacement, misinformation, and potential misuse in warfare or surveillance. As with past technological shifts, the societal impact will fall disproportionately on ordinary people, particularly those least equipped to adapt. The real danger of self-learning AI is not a sudden sci-fi takeover, but gradual loss of control, misuse, and large-scale unintended consequences. Rather than assuming extreme science-fiction outcomes, the evidence points to a more immediate and realistic conclusion: AI will continue to advance rapidly, its adoption will be unavoidable, and its risks will need to be managed, not prevented outright.
youtube · AI Governance · 2026-04-11T17:0…
Coding Result
Dimension        Value
Responsibility   distributed
Reasoning        consequentialist
Policy           regulate
Emotion          fear
Coded at         2026-04-27T06:24:53.388235
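
Downstream analysis can treat each coding result as a small typed record. Below is a minimal sketch in Python, assuming only the four dimensions and the label strings visible above; `CodingResult` and its field names are illustrative, not the tool's actual API, and the label sets shown in comments are just the values observed in this batch:

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass(frozen=True)
class CodingResult:
    """One coded comment across the four dimensions shown above."""
    responsibility: str   # observed values: none, company, developer, ai_itself, distributed
    reasoning: str        # observed values: consequentialist, deontological, contractualist, unclear
    policy: str           # observed values: regulate, ban, liability, none
    emotion: str          # observed values: fear, outrage, approval, indifference
    coded_at: datetime

# The result shown in the table above.
result = CodingResult(
    responsibility="distributed",
    reasoning="consequentialist",
    policy="regulate",
    emotion="fear",
    coded_at=datetime.fromisoformat("2026-04-27T06:24:53.388235"),
)
```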
Raw LLM Response
[{"id":"ytc_UgwK49AIuYKcNHns5M94AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},{"id":"ytc_UgxqdG_ol8ckpaEQlj94AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},{"id":"ytc_UgyYhAwnns_4QO7TEEB4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},{"id":"ytc_UgykabLByeqndfW9ApZ4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},{"id":"ytc_Ugz8XqvHyPO3B1Dc15B4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},{"id":"ytc_UgwqqXeFZ8yXhWxbfh14AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"},{"id":"ytc_Ugxf6x6LEQdCvi9aEIx4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},{"id":"ytc_UgxDtadu_Rjcgbh35xF4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},{"id":"ytc_UgwIfzP8kL4tyz1-7Zl4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},{"id":"ytc_UgyhkB2vMGjEtRxBi8F4AaABAg","responsibility":"distributed","reasoning":"contractualist","policy":"regulate","emotion":"fear"}]