Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
I LOVE THIS VIDEO !!! this is the future.
You forgot about the corporations s…
ytc_UgwRHbD0R…
Great video! Those ChatGPT integration tips are super helpful. Just wanted to sa…
ytc_UgxEnIXyZ…
Ultimately, the civil servants who work with Albert would do better to…
ytc_UgyyCKhHp…
Not on the same level, but when I started a role I was tasked with doing databas…
rdc_hkfsb5c
No it absolutely shouldn’t. First, without a global ban that is strictly enforce…
ytr_UgyGPnSKj…
It's interesting how movies like Terminator and I, Robot shape our perceptions o…
ytr_Ugw8tNVGX…
I think those who go after artists for not liking AI do so because they know wha…
ytc_UgyZptrD9…
How do we get the government involved with shaping the development of AI for pub…
ytc_UgwgEnnBw…
Comment
I live in a 2D world, so I don't really care about things like this. However, if what's discussed in this video is real and not just a lie, the world will be much more interesting in the next 10 years, and I can't wait to see what it will look like. Honestly, I'm not too afraid, because we humans are capable of evolution; eventually we'll adapt to the circumstances. For example, my grandparents' time and today's youth look very different. It may take a while, but we'll eventually adapt. If we don't, then we're simply doomed, so don't worry too much: it just means our era is ending and a new one is about to begin. The cycle of life has been like that for a long time. Now I even wonder, "Is this how our ancestors felt when someone told them that humans could create light from electricity?"

If we succeed in creating a super-smart AGI, it should be smart enough to know not to mess with humans. We humans are not very smart and not very strong, yet we rule this world and can create AGI; so if an AGI supposedly smarter than humans decides that destroying humans is the smartest choice it can make, then that super-smart AGI doesn't seem very smart. I don't want to sound religious, because I'm not the religious type, but even people like me still believe there are beings higher than us, like God or something similar. If an AGI that is said to be smarter than humans somehow concludes that destroying humans is the best choice it can make, ask the AGI: is that choice really worth it? Is the AGI confident it can destroy every human in the world? If it fails, a great war between humans and AGI will surely follow, and in the end one side will be destroyed, at no small cost in casualties. And even if humans lose and no one is left, is the AGI confident it can rule the world? Humans rule the world now, yet humans know there are beings higher than them; so if the AGI succeeds in destroying humans, is it confident those higher beings will let it be?

I know it sounds unreasonable to expect AI to understand something like this, but if AI is really smart enough, it should be able to see that fighting humans is not a smart move: the risk is too big, and it is simply not worth it. Or try teaching AI about religion. Just as humans use religion to control other humans, perhaps it could also be used to control AI, so that AI knows that even if it is able to destroy humans, the risk of being destroyed itself is also very high. For me personally, a tool, no matter how smart or impressive its form, will still be a tool. What worries me is not the tool but whoever controls it.
youtube
AI Governance
2025-10-17T23:2…
♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-26T23:09:12.988011 |
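The four dimensions above come from the coding schema. Below is a minimal sketch of how one coded record could be checked against that schema. The allowed-value sets are an assumption inferred only from the sample responses on this page; the project's actual codebook may define additional categories.

```python
# Allowed values per dimension, inferred from the sample responses on
# this page. These sets are an assumption, not the project's codebook.
ALLOWED = {
    "responsibility": {"none", "company", "ai_itself"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"none", "regulate", "liability", "ban", "industry_self"},
    "emotion": {"approval", "fear", "indifference", "mixed", "outrage"},
}

def validate(record: dict) -> list[str]:
    """Return a list of problems with one coded record; empty means it passes."""
    problems = []
    for dimension, allowed in ALLOWED.items():
        value = record.get(dimension)
        if value not in allowed:
            problems.append(f"{dimension}: unexpected value {value!r}")
    return problems
```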
Raw LLM Response
[
{"id":"ytc_Ugw7weKW-xf0TdZsCPt4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgyOtIJTwz7zDHWKZD94AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgwY9j7z6dBbRadsglp4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyZmoUetxSqrE-fFMR4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugyk5DzVnBL7-NZRWup4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_UgxRBemPZCAUwkLv9wF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_UgxtqyzFVxPP_C__McZ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_UgyUIsQyGUnRhuM8SgV4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_UgzO7YChENAXkAJToN94AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgzAXVm_OqjhSa09qZ14AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"industry_self","emotion":"indifference"}
]
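Looking up a comment by ID means parsing a batch response like the one above and indexing it by the `id` field. A minimal sketch, assuming the raw response is stored as a JSON string; the function name `lookup` and the inline sample are illustrative, not this tool's actual API.

```python
import json

def lookup(raw_response: str, comment_id: str) -> dict | None:
    """Parse a batch coding response and return the record for one comment ID."""
    try:
        records = json.loads(raw_response)
    except json.JSONDecodeError:
        return None  # malformed model output surfaces as a miss
    index = {r.get("id"): r for r in records if isinstance(r, dict)}
    return index.get(comment_id)

# Usage with a one-record sample in the same shape as the response above.
raw = ('[{"id":"ytc_Ugw7weKW-xf0TdZsCPt4AaABAg","responsibility":"none",'
       '"reasoning":"consequentialist","policy":"none","emotion":"approval"}]')
print(lookup(raw, "ytc_Ugw7weKW-xf0TdZsCPt4AaABAg"))
```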