Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
The worst risk I see is not that we “can't control it”. Nonsense; we've gotten pretty good at controlling them already. The bigger risk is their capacity to control us. Imagine a world where everything you say must be checked by AI before you can say it. We are nearing an age where this applies to every bit of internet content, every comment, every private message and eventually everything stored on every hard drive. Taken to its logical extreme, this extends beyond the “internet” and leaks into every other aspect of society, which will inevitably be monitored by AI systems. “It's to keep everyone safe, you see.” No, it is the total obliteration of the principles of freedom and privacy, which can only end with the eradication of individual expression and diversity of worldviews, including all moral and linguistic schemata. Sounds crazy and fearful, I know, but this is the obvious direction the world is going in, and young people are increasingly unbothered by it, often in fact supporting it. The problem transcends politics and culture; all culture is subject to the dominant culture of Internet Safety in the same way that indigenous cultures have been subject to colonizing ones. Consider that the regulations on AI will be established by the status quo—government, big industry and a handful of powerful individuals—who get to decide which things are “safe” and which are “unsafe” for society, i.e. which parameters of human behavior to associate with risk and therefore minimize. Even if this process could be guided as democratically as possible, because of its all-encompassing power (to apply the law immediately to all scales of behavior no matter how minute), it would be oppressive beyond anything that has existed up to this point.
youtube · AI Governance · 2024-11-11T19:2…
Coding Result
Dimension       Value
Responsibility  company
Reasoning       deontological
Policy          regulate
Emotion         fear
Coded at        2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id": "ytc_Ugz_o_4UNQo_GhkZI294AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgzSfTeQw4BKcqxzOux4AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgzINkcMf9qZnjpwVBh4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgyiXASSQXCnxV2XPRp4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgzETqjCT8linwuvFJd4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_Ugz5eRt-bmjhG4GyYA14AaABAg", "responsibility": "user", "reasoning": "deontological", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgxidoirlAXPIubxvzh4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_Ugxr6ngB1oQQxcwIvwl4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgwMZukF0GChESvzfyZ4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgwEaV5XvfIWQtNbWwl4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "resignation"}
]
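A response like the one above can be checked mechanically before the codings are trusted. The sketch below is a minimal Python example, not part of the original pipeline: the function names (parse_codings, tally) are hypothetical, and the allowed value sets are inferred only from the responses shown here, not from a documented codebook.

```python
import json
from collections import Counter

# Allowed values per dimension, inferred from the raw responses above
# (an assumption, not a documented schema).
ALLOWED = {
    "responsibility": {"company", "user", "ai_itself", "distributed", "none"},
    "reasoning": {"deontological", "consequentialist", "unclear"},
    "policy": {"regulate", "none"},
    "emotion": {"fear", "outrage", "mixed", "indifference",
                "approval", "resignation"},
}

def parse_codings(raw):
    """Extract the JSON array from a raw LLM response and validate each record."""
    # Tolerate stray text before/after the array by slicing to the brackets.
    start, end = raw.find("["), raw.rfind("]") + 1
    records = json.loads(raw[start:end])
    for rec in records:
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{rec.get('id')}: bad {dim}={rec.get(dim)!r}")
    return records

def tally(records, dimension="emotion"):
    """Count how often each value of one dimension appears."""
    return Counter(rec[dimension] for rec in records)
```

Running tally over a parsed batch gives a quick per-dimension distribution (e.g. how many comments were coded "fear"), which makes malformed or out-of-vocabulary LLM output easy to spot.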