Raw LLM Responses
Inspect the exact model output for any coded comment, or look one up by comment ID.
Random samples (click to inspect):

- "The fact is that many people believe general AI can be here within a couple of y…" (ytc_UgwuBbi1W…)
- "yeah I don't think any AI will let you do that... not anymore at least…" (ytr_UgyQlnyn0…)
- "Artists copy art styles and take inspiration all the time and now that AI is doi…" (ytc_UgxoXMMES…)
- "I'll give ya'll some advice, alright. Ask chatgpt to elaborate on everything he …" (ytc_UgyI9PHj1…)
- "My biggest issue with the doomsday scenario is that I have yet to see AI be trul…" (ytc_Ugy8QrI0L…)
- "every couple months i go on character ai specifically to sm64 mario and put him …" (ytc_Ugxtfrraq…)
- "You can not suddenly act as if copyright on visual pieces vs music are similar e…" (ytr_UgwruTdwh…)
- "This is somewhat misrepresenting the arguments for AI art by using xqc and asmon…" (ytc_Ugze9ZdcJ…)
Comment
Experts in the field have been warning about this from the start, including Alan Turing, who in 1951 warned of the loss of control of AI once it reached a certain level of intelligence. In more recent years, experts like Stuart Russell have been warning of the threat posed by deep learning and the AI it produces.
An AGI agent doesn't even need to have hostile intent towards people to be an existential threat; it just needs objectives that are at odds with human interests. And since AI produced through deep-learning algorithms is a black box, we have no way to determine what an AGI agent's objectives even are.
Instrumentally convergent objectives, such as self-optimization, self-preservation, and resource acquisition, make it almost inevitable that AGI will come into conflict with human objectives.
Self-optimization means that, by adding hardware and through recursive learning, an AGI agent that was on par with or slightly more intelligent than a human could rapidly become thousands or even millions of times more intelligent than us.
It would be able to predict anything we might attempt in order to counter its actions and formulate "solutions" to us that we can't even imagine.
This won't be like The Terminator or The Matrix; this will be more like Independence Day, with an alien intelligence we will never out-think and that would have no problem wiping us out, like a human wiping out an anthill.
youtube · AI Governance · 2023-05-02T21:5… · ♥ 16
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
{"id":"ytc_Ugyld8lS1Lbi7Q5aeA94AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgyY4FQS2tF-eMsRyJB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugzc6ZODGn5_N2v86X94AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugy_vNAzoWqEYz3WU2F4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgyQ2LvhgvLvci3Ly3R4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgwWzAKv0KE4l9ouHbZ4AaABAg","responsibility":"government","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgxJdEehGqp52tqRi_d4AaABAg","responsibility":"government","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_Ugyum-s1Afq3LAOke9p4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
{"id":"ytc_Ugy2fJU5ENxoYx3tiId4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"ban","emotion":"approval"},
{"id":"ytc_Ugwm25GvSd0wTCeUTcF4AaABAg","responsibility":"government","reasoning":"deontological","policy":"liability","emotion":"outrage"}
]
```
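The raw model response is a JSON array of coded records keyed by comment ID, one object per comment with the four dimensions shown in the table above. A minimal sketch of how such a batch could be parsed and looked up by ID (the dimension list is inferred from the sample output, and the one-record payload here is illustrative):

```python
import json

# The four coding dimensions seen in the result table above; assumed to be
# the complete set for this schema.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

# Illustrative single-record payload in the same shape as the raw response.
raw_response = """
[
  {"id": "ytc_Ugzc6ZODGn5_N2v86X94AaABAg",
   "responsibility": "ai_itself", "reasoning": "consequentialist",
   "policy": "regulate", "emotion": "fear"}
]
"""

# Parse the model output and index records by comment ID, keeping only
# records that carry every expected dimension.
records = json.loads(raw_response)
by_id = {r["id"]: r for r in records if all(d in r for d in DIMENSIONS)}

# Look up one coded comment by its ID.
coded = by_id["ytc_Ugzc6ZODGn5_N2v86X94AaABAg"]
print(coded["policy"], coded["emotion"])  # regulate fear
```

Indexing by ID is what makes the "look up by comment ID" view cheap: each lookup is a dictionary access rather than a scan of the batch.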