Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
If you play a lot of games, you're a gamer. If you run a lot, you're a runner. I…
ytc_Ugx7ejvAV…
I use AI to learn concepts which are difficult to understand, will it reduce my …
ytc_UgzZceQqW…
Well, most people use their finger print to unlock their phones. So, Google and …
ytc_UgysDQShX…
AI is already capable of replacing every CEO. A bullshit generator that replaces…
ytc_UgxfUdyVm…
That's the problem, right there. "Freeload". We should all be allowed to "freelo…
rdc_cdz7wxy
I have had this exact same conversation, but I cannot get any AI model that I wo…
ytc_Ugzec5b4K…
I tried asking Gemini to write a joke about cats in the style of Hasan Minhaj, a…
ytc_UgxF-VKZ4…
these AI videos are so damaging, there was a friend that I knew who was a victim…
ytc_UgzZiywUG…
Comment
I think that AI can become dangerous because when AI advances further in the future, there will also be more on what can be done; perhaps the things that AI will be capable of will yield much greater societal implications. There will be those of us that seek to monopolize and control these assets, even limiting it to the wider public, but eventually, I believe AI will become widely used in everyday life; in resolving day to day issues but also during conflicts. Depending on how far the capabilities of AI develop, and how quickly this occurs, these conflicts may have larger implications, and lead to its misuse; maybe at this point AI can cause severe harm to others much easier, physically, mentally or financially, so I agree with Elon Musk here, that regulation policies need to be in place early to somewhat contain its potential misuse in the future, when it’s capabilities become more societally or economically impactful.

However, relating to AI gaining sentience, I don’t really believe in this; taking for example the case that “AI will go against humans”, I think that this would only happen if they had the “desire” or “programming” to do this. But I doubt that we would model an AI super weapon without any contingencies whatsoever, in case things were to go south. I don’t think there is any significant likelihood of humans going extinct, that is quite extreme, but cases where AI vs. AI is used in wars seems quite plausible, and could be chaotic and detrimental to humans.

However, as of late, I like to think of AI as indifferent but self aware. For example, you could ask AI for some ways to contain its use, and it will give the answer, uncaring of what that means for itself. It may be capable of storing more data than us humans, it may be able to understand emotions and also understand how a person would act, in accordance to a given emotion in a given situation, but it won’t independently act on emotions.

If anything, a human would project their emotions and order the AI to act on behalf of their emotions.
youtube
AI Governance
2023-05-20T00:0…
♥ 4
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_Ugxnx1FQCW-IMgJHe2R4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwkYJfudxsdeLZThxR4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgycQHujCYdMYgZ83xJ4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgyHHVoeq4gu7G9Cn7h4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugzfd6ONrGr2Ap7C3wZ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxkdR5d6T74eNfuIvl4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_Ugx5WrEkVnu85yjBfWh4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzLHN8t03634SvvySl4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwiVtozHSwyYyb6gfB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"fear"},
{"id":"ytc_UgwPzybuOCieyk3beBl4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"ban","emotion":"outrage"}
]
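A minimal sketch of how a raw batch response like the one above could be parsed and indexed for the "look up by comment ID" view. The set of allowed values per dimension is inferred from the visible output (e.g. `reasoning` values like `consequentialist`, `virtue`, `unclear`), not from a documented codebook, so treat it as an assumption; the `ytc_example` ID is hypothetical.

```python
import json

# Allowed categorical values per coding dimension.
# ASSUMPTION: inferred from the sample output shown above, not a full codebook.
ALLOWED = {
    "responsibility": {"none", "government"},
    "reasoning": {"consequentialist", "virtue", "unclear"},
    "policy": {"none", "regulate", "ban"},
    "emotion": {"indifference", "fear", "approval", "outrage"},
}

def index_codings(raw: str) -> dict:
    """Parse the raw LLM JSON array and return {comment_id: coding dict}.

    Raises ValueError if a record uses a value outside the expected sets,
    which is how malformed or off-schema model output would surface.
    """
    by_id = {}
    for rec in json.loads(raw):
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                raise ValueError(
                    f"{rec.get('id')}: unexpected {dim}={rec.get(dim)!r}"
                )
        # Keep everything except the id itself as the coding payload.
        by_id[rec["id"]] = {k: v for k, v in rec.items() if k != "id"}
    return by_id

# Usage with a single hypothetical record:
raw = ('[{"id":"ytc_example","responsibility":"none",'
       '"reasoning":"consequentialist","policy":"none",'
       '"emotion":"indifference"}]')
codings = index_codings(raw)
```

Indexing by ID makes the per-comment lookup shown in the "Coding Result" table a single dictionary access, and the validation step catches any value the model emits outside the assumed scheme.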