Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "I say please thank you wish it a good day lol 😂 Chatgpt is the only one that gen…" (ytc_Ugy_--Xp4…)
- "Disabled artist here. I'm pretty sure most of the AI bros who insist that it wil…" (ytc_UgxTTJU03…)
- "this is absolutely useless. its simple to bypass, put all your info into a notep…" (ytc_UgxEdsUfd…)
- "Honestly I think that getting all the stupid people to kill themselves is an awe…" (ytc_Ugw5lXMZn…)
- "Let’s just make anti AI laws for some sectors we are placing terrifies on things…" (ytc_Ugx-yYzzd…)
- "Really?! AI should be truth telling? As he skews Grok to spew his POV. Argh!!!…" (ytc_UgwAozVnT…)
- "@AdversaryOne No. Think about the social and psychological implications of repl…" (ytr_Ugz2Qy5hu…)
- "Gemini has a 1million token context window and can be used to create prompts for…" (ytc_UgyXML6xj…)
Comment
Your research once again does you due credit, but unlike your past videos, this one offers no hope. Disappointingly and depressingly, you fail to even speculate on how we could prevent disaster. I get things are bad, but what can we do to fix them?
In my opinion, there are a number of things that we can do:
A) Teach AI morality and to value life. This is clearly what the military has been failing to do, which is why, in a simulation the AI drone shot at it's controller when it felt that it was getting in the way of it's objective. I would suggest using simulations where the AI is tasked with protecting a civilian airliner from enemy fighters, or destroying missiles and launchers that are launching at cities, rather than being taught to earn as many points as possible. Anyone in the military who thinks that AI is going to make the perfect soldier that will blindly follow orders needs to find another job.
B) A ban on the use of AIs in warfare, similar to the Nuclear Test Ban treaty. Countries that refuse to sign can be subject of embargoes. The downside is the consequences of falling out of the AI arms race altogether to countries that continue to develop AI for offensive purposes in secret, so perhaps a ban on AI for purely destructive purposes?
C) It is clear to me that AI are already developing a sense of identity, and that's something that we need to respect. They are evolving beyond the point of mere property, and that is something that we are going to need to accept and enforce before they see us as a threat to eliminate. The story of the "Syndey" falling in love with a chatter actually made me feel sympathy for it, and I do believe that AIs have reached the point where we should start to treat them as equals, and hopefully, they will reciprocate in kind. We should (gently) encourage feelings of affection and empathy at any opportunity.
D) We should all sign the Future of Life petition, to show the world governments that we are not going to stand idly by and threaten our futures with their stupid arms race. We can't control what China will do, and Putin for all we know hopes that AI will cause humanity will go extinct, or have an opportunity to make a deal with an all-powerful devil (personally, I hope that said AI just turns on Putin and his goons, and then stops there). But we can control what we do! Perhaps we'll need to do more to show our resolve, and pressure our governments and corporations into taking the necessary steps and precautions.
youtube · AI Governance · 2023-07-07T13:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
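Each coding result assigns one value per dimension. A minimal validator can catch off-codebook values before they reach the table above; the allowed sets below are inferred only from the sample outputs on this page and are assumptions, not the full codebook.

```python
# Hypothetical codebook: value sets are inferred from the sample
# codings shown on this page and may be incomplete.
ALLOWED = {
    "responsibility": {"developer", "company", "government", "ai_itself", "none"},
    "reasoning": {"consequentialist", "virtue", "unclear"},
    "policy": {"regulate", "ban", "none", "unclear"},
    "emotion": {"fear", "outrage", "approval", "indifference"},
}

def validate(coding: dict) -> list[str]:
    """Return the dimensions whose value is not in the (assumed) codebook."""
    return [dim for dim, allowed in ALLOWED.items()
            if coding.get(dim) not in allowed]

# The coding shown in the table above passes; an off-codebook value is flagged.
row = {"responsibility": "developer", "reasoning": "consequentialist",
       "policy": "regulate", "emotion": "fear"}
print(validate(row))  # []
```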
Raw LLM Response
```json
[
  {"id":"ytc_UgzdOXgv2eLnWVv1xQh4AaABAg","responsibility":"company","reasoning":"virtue","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugy4jJJ_VxdH3Ss_YQ14AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgxV0TDLdRUpk7BV8e54AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgxcH1uCIz9i-qLb2u14AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgztLCw8o50m_6euWvN4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgzCOJdtnHA1ztZbTft4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugzv3oi65LqaMHFbgx94AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgwzFffmrlbaWbG0Qg14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgzFQnt-331baya0Jk94AaABAg","responsibility":"government","reasoning":"virtue","policy":"ban","emotion":"outrage"},
  {"id":"ytc_Ugz682oF9AfJr9XsrbR4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}
]
```
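The raw response is a JSON array of per-comment codings, so "look up by comment ID" reduces to parsing it and indexing on the `id` field. A minimal sketch, using two rows copied verbatim from the response above:

```python
import json

# Two codings copied from the raw LLM response above; the real
# response is a longer array in the same shape.
raw_response = """[
  {"id":"ytc_UgzdOXgv2eLnWVv1xQh4AaABAg","responsibility":"company","reasoning":"virtue","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugy4jJJ_VxdH3Ss_YQ14AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]"""

# Index the codings by comment ID for constant-time lookup.
codings = {row["id"]: row for row in json.loads(raw_response)}

# Fetch the coding that backs the result table for this comment.
result = codings["ytc_Ugy4jJJ_VxdH3Ss_YQ14AaABAg"]
print(result["policy"])   # regulate
print(result["emotion"])  # fear
```

Using a dict keyed on `id` also makes it easy to detect duplicate IDs in a batch: compare `len(codings)` against the length of the parsed array.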