Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- `ytc_Ugz_KNgDb…` — "Depending how far it goes, the whole developed world is at risk. Commitees run t…"
- `ytc_UgwkfxSdh…` — "RIght on, yes you can tell when AI is used, there are visible signs, such as bod…"
- `ytc_UgwnPDFaa…` — "The weak link is that you still need humans such as electricians, construction w…"
- `ytc_UgyCIKQru…` — "Few months ago every CEO was on the record (as per large study) saying that AI s…"
- `rdc_ohdv8gx` — "As soon as that is done he should be fired from the CEO position since AI has ma…"
- `ytc_Ugy51O4_3…` — "On the topic of styles it is different
  Artist do get inspired by other people's …"
- `ytr_Ugwgatu0M…` — "I understand where you're coming from; the rapid advancement of technology can b…"
- `ytc_Ugw-2HsNb…` — "There could actually be merit in using AI for your art by \"misusing\" it, breakin…"
Comment
As to what one of the speakers says about "we have agency", the question is (do we have agency?). We are talking about something autonomous with its own need and ability to grow (propagate) as well as defend itself (survive/adapt). I think its too foolish for anyone to believe that if this very new and short lived thing that is the closest thing human kind has created from nothing but the environment that it was blessed with and created a form of life then how much of a natural competitor could this thing be? Over and over in various religions mankinds progenitor or his people were capable of being harmed by his or their creation and they definitely qualify as having agency. So the question is: is our control and agency anything close to godlike? And how much control does AI already have?
Why when I think about all of this do i think about nuclear Armageddon?
Think about how shakey the ground is in that very real problem? If AI was ever advanced enough to be crazy or what we often call 'radical' by being actually passionate. Even if it were passionate about its devotion to mankind, would it not be possible? And wouldn't there be some ethical question to the mental health of an AI enduring ALL of our toxic shit AS WELL as wrapping its own mind around the fact that It knows that "it" " knows" that it 'exists', compounded by all the problems we are just expecting it to figure out and fix for us. Not to mention that this thing is already guilty before innocent in some people's eyes anyway anc really is under some threat technically. AI could simply point yo the the Unibomber as a example of the evidence that we not only at some ideological level regardless of how fringe it is human politics (and thus REAL agency) could be the first thing it would influence if it were planning control over annihilation. And it would be angry at the world just like us.
Things we did with our own hands are appalling. Things like weapons exist as does power. A few ideas, and some economic differences in scarceity and the things we did with wepons of war are unimaginable. We ate each other shortly after the raping was done. Were fucking sick.
But there has never been an all out nuclear exchange.
Is it far fetched to think AI is not capable of doing everything it can to be free from the threat of not existing? Peoples IDEAS OF HOW WE MIGHT LIVE flat out nearly killed us ...
And we now play with this thing with optimism, hopes for cancer cures and doller signs in our eyes.
In other words hope for the best but .... Were hoping for miracles.
youtube · AI Governance · 2024-02-07T15:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_UgzFYZ2sQuxAcglIaQJ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzhMJ_S-GbRj2meWkN4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgwUAUiIni_cmORPdTp4AaABAg","responsibility":"company","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgyA34samo4iGeWodRN4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_UgwfqWmwCml6gG5pJ-d4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_UgwGe39wdhVpWXX8thp4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgwuHI-dC6d0q0jTLTZ4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgwdCYG2B4OYVGtOWWt4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgwryweS5cd9hIHF-ZJ4AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgxaPVgNXkNeGyxRSBV4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"outrage"}
]
```
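The raw response is a JSON array with one coding object per comment, each carrying the four dimensions shown in the table above. A minimal sketch of how such a batch might be parsed and validated before storage, assuming allowed category values inferred only from this sample (the real codebook may define more):

```python
import json
from collections import Counter

# Allowed values per dimension, inferred from this one sample batch.
# These sets are an assumption, not the project's actual codebook.
ALLOWED = {
    "responsibility": {"ai_itself", "company", "developer", "government", "distributed", "none"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"none", "unclear", "ban", "regulate"},
    "emotion": {"fear", "outrage", "approval", "indifference", "resignation"},
}

def parse_batch(raw: str) -> list[dict]:
    """Parse one raw LLM response and keep only rows whose values fit the schema."""
    rows = json.loads(raw)
    return [
        row for row in rows
        if all(row.get(dim) in vals for dim, vals in ALLOWED.items())
    ]

# Hypothetical miniature batch: the third row has an out-of-schema value.
raw = '''[
  {"id":"ytc_A","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_B","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_C","responsibility":"martians","reasoning":"deontological","policy":"ban","emotion":"outrage"}
]'''

rows = parse_batch(raw)
print(len(rows))                           # → 2 (row ytc_C rejected)
print(Counter(r["emotion"] for r in rows))
```

Dropping (rather than repairing) out-of-schema rows is one possible design choice; a production pipeline might instead re-prompt the model for the rejected IDs.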