Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
The risks associated with artificial intelligence are real and increasingly acknowledged by researchers, governments, and industry leaders. Modern AI systems are designed to learn from data and improve over time, which introduces both significant benefits and non-trivial risks if not properly governed.
History suggests that once a transformative technology emerges, it is rarely abandoned. Nuclear weapons are a clear example: despite widespread recognition of their destructive potential, no nuclear-armed state has voluntarily relinquished them entirely. This precedent indicates that advanced AI development is unlikely to be halted, regardless of its risks.
AI will undoubtedly deliver major advantages, including improvements in productivity, healthcare, and scientific discovery, but it will also create disruptions, including job displacement, misinformation, and potential misuse in warfare or surveillance. As with past technological shifts, the societal impact will fall disproportionately on ordinary people, particularly those least equipped to adapt.
The real danger of self-learning AI is not a sudden sci-fi takeover, but gradual loss of control, misuse, and large-scale unintended consequences. Rather than assuming extreme science-fiction outcomes, the evidence points to a more immediate and realistic conclusion: AI will continue to advance rapidly, its adoption will be unavoidable, and its risks will need to be managed, not prevented outright.
youtube
AI Governance
2026-04-11T17:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
[
  {"id": "ytc_UgwK49AIuYKcNHns5M94AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgxqdG_ol8ckpaEQlj94AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgyYhAwnns_4QO7TEEB4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_UgykabLByeqndfW9ApZ4AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_Ugz8XqvHyPO3B1Dc15B4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgwqqXeFZ8yXhWxbfh14AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_Ugxf6x6LEQdCvi9aEIx4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgxDtadu_Rjcgbh35xF4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgwIfzP8kL4tyz1-7Zl4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "ban", "emotion": "outrage"},
  {"id": "ytc_UgyhkB2vMGjEtRxBi8F4AaABAg", "responsibility": "distributed", "reasoning": "contractualist", "policy": "regulate", "emotion": "fear"}
]
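A raw response like the one above has to be parsed and validated before the codes can be trusted, since the model may emit malformed JSON or values outside the codebook. Below is a minimal sketch of that step. The allowed value sets in `CODEBOOK` are inferred only from the sample response shown here; the project's actual codebook very likely defines additional values, and the function name `parse_coding_response` is hypothetical.

```python
import json

# Allowed codes per dimension. NOTE: these sets are inferred from the one
# sample response above and are almost certainly incomplete (assumption).
CODEBOOK = {
    "responsibility": {"none", "developer", "company", "ai_itself", "distributed"},
    "reasoning": {"unclear", "consequentialist", "deontological", "contractualist"},
    "policy": {"none", "regulate", "liability", "ban"},
    "emotion": {"indifference", "fear", "outrage", "approval"},
}


def parse_coding_response(raw: str) -> dict:
    """Parse a raw LLM coding response (a JSON array of records) into a
    lookup table keyed by comment ID, rejecting records that are missing
    fields or use values outside the codebook."""
    records = json.loads(raw)  # raises ValueError on malformed JSON
    coded = {}
    for rec in records:
        cid = rec.get("id")
        if not cid:
            raise ValueError(f"record missing 'id': {rec!r}")
        for dim, allowed in CODEBOOK.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{cid}: invalid {dim!r} value {rec.get(dim)!r}")
        # Keep only the coded dimensions, dropping any extra keys.
        coded[cid] = {dim: rec[dim] for dim in CODEBOOK}
    return coded
```

With this in place, the "Look up by comment ID" view reduces to a dictionary access, e.g. `parse_coding_response(raw)["ytc_UgykabLByeqndfW9ApZ4AaABAg"]` returns the four-dimension coding shown in the result table.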