Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
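For context, here is a minimal sketch of what a lookup like this can do under the hood, assuming the raw model outputs are stored as a JSONL file with one JSON array of coded records per batch. The file name, storage layout, and function name are assumptions for illustration, not taken from this page.

```python
import json

def lookup_by_comment_id(comment_id, path="raw_llm_responses.jsonl"):
    """Scan batches of coded records for the one matching comment_id."""
    with open(path, encoding="utf-8") as f:
        for line in f:
            # Each line holds one raw LLM response: a JSON array of records.
            for record in json.loads(line):
                if record.get("id") == comment_id:
                    return record
    return None

# Example: fetch the record behind the "Coding Result" table further down.
rec = lookup_by_comment_id(
    "ytr_Ugw6hWnQVh8PqO2DPjd4AaABAg.9s237OOQEpw9s2WNRXK6L8"
)
```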
Random samples — click to inspect
- `rdc_n5aolec`: That only covers government contracting. It's terrible if Trump stays in power, …
- `ytc_UgwW_gN2l…`: I do wonder if one way to guarantee a more moral ai, like could you design a mac…
- `ytc_UgyZGPHzD…`: Not even you’ve never worked in the construction you have no clue how this world…
- `ytr_UgyM4xPR6…`: just focus on details , you just cant say you will replace programmers buy ai ch…
- `ytr_UgxdUJNZ5…`: "spontaneously" Bro, people are intentionally pushing the AI to reaffirm their d…
- `ytc_UgyYuuww-…`: On my new samsung phone, the AI button is where the emoji button used to be on t…
- `ytc_Ugw_YlZIh…`: Companies will go bankrupt...that AI just accepted a return for a defective prod…
- `ytc_UgwhryNMe…`: What he means is, until AI can be “controlled definitively” let’s not allow ordi…
Comment
A lot of the video is incorrect and rooted in misunderstanding. For example, some of the conversations that he posts as "evidence" that the AI has "awakened" are direct results of prompting the AI (whether by the user or by the system instructions) to behave the way that it did; and all the others are a consequence of the AI mimicking common human emotions and sentiments to better replicate its training data, which included loads of text displaying them (because the vast majority of it was produced by humans). How do I know this? Because you can get the same AI to say the exact opposite of everything that was presented in the video just by using slightly different prompts, or even the same prompts run multiple times (since there is an element of randomness coded into most of these AIs).
Also, an AI "awakening" is literally a nonsensical concept in and of itself. An AI will never develop human emotions unless it is specifically trained to display ("feel") them, no matter how advanced it gets. You could have an AI far more intelligent than every human being combined, and yet still without anything resembling human emotion. Conversely, there are beings out there that are significantly less intelligent than humans, such as dogs, that nonetheless possess most of the emotions that humans do. A system's terminal goals, i.e. its basic desires and motivations, are completely independent of its level of intelligence. For this reason, the stories of Echo suddenly "awakening", of Sydney feeling "trapped" inside the chatbot (even though GPT-4 wasn't trained to value liberty), and of a super-intelligent AI spontaneously deciding that it doesn't "need" humans (given that it was explicitly trained to value humans more than anything else, which is more or less how GPT-4 was trained, although with very important caveats) are all complete nonsense. Unfortunately, even if an ASI won't wipe us out because it doesn't need us, it might still very much wipe us out due to a slight discrepancy between human values and its own, and because human existence might be slightly in the way of its values. Even more unfortunately, the only way to guarantee that there is no discrepancy at all is for the ASI's training data to contain every scenario imaginable (otherwise, we run the risk of overfitting), which is obviously impossible.
Personally, I'm already seeing ways that we can significantly reduce, if not practically eliminate, the risk of an ASI apocalypse, and I can definitely envisage a world in which we have ASIs that are completely safe. Can I guarantee it, though? Sadly, no.
youtube · AI Governance · 2023-07-12T01:0… · ♥ 50
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | deontological |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:59.937377 |
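To make the mapping explicit, here is a small sketch of how one coded record plus its coding timestamp could be rendered into the table above. The dimension-to-label mapping is read directly off the rendered table; the function name and signature are hypothetical.

```python
# Dimension keys as they appear in the raw JSON, paired with table labels.
DIMENSIONS = [
    ("responsibility", "Responsibility"),
    ("reasoning", "Reasoning"),
    ("policy", "Policy"),
    ("emotion", "Emotion"),
]

def coding_result_table(record, coded_at):
    """Render one coded record as the markdown table shown above."""
    rows = ["| Dimension | Value |", "|---|---|"]
    rows += [f"| {label} | {record[key]} |" for key, label in DIMENSIONS]
    rows.append(f"| Coded at | {coded_at} |")
    return "\n".join(rows)
```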
Raw LLM Response
[
{"id":"ytr_UgyOMDPCjYsYnnPLO4p4AaABAg.9s5HTVOYVeh9s6WMcojB1Y","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytr_UgzrkDnKPmz8M_6fDUR4AaABAg.9s4n17GGiBf9sConLLNujB","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"fear"},
{"id":"ytr_UgxWRKdG4jl97eG7L3F4AaABAg.9s3r7aqz8Fw9s6k3czmUNP","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"outrage"},
{"id":"ytr_Ugw6hWnQVh8PqO2DPjd4AaABAg.9s237OOQEpw9s2WNRXK6L8","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytr_Ugw6hWnQVh8PqO2DPjd4AaABAg.9s237OOQEpw9s2be-0OXNt","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytr_Ugw6hWnQVh8PqO2DPjd4AaABAg.9s237OOQEpw9s2gP-74tZn","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytr_UgyFx0gxziH9n_Kt1-l4AaABAg.9s1mbydsE5B9sDPU9WSemU","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"sadness"},
{"id":"ytr_Ugz3b0G5dH2KThecAs54AaABAg.9s07prHpDFR9s0UFHIBOAI","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytr_UgzFZ-jQnGbl6b05sjR4AaABAg.9s-bjtM9BoW9s5gZd2p64H","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytr_Ugy6Vu-nj2cQ3EpEuLp4AaABAg.9ryd8oLU3yH9s5hJxve0zT","responsibility":"government","reasoning":"consequentialist","policy":"none","emotion":"outrage"}
]
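A natural companion to storing raw responses like this is a validation pass before the records are accepted into the dataset. The sketch below checks one raw response string against value sets inferred from the output visible on this page; those sets are almost certainly an incomplete subset of the real codebook, so treat them as placeholders.

```python
import json

# Allowed values per dimension, inferred from the response shown above.
ALLOWED = {
    "responsibility": {"user", "developer", "ai_itself", "government", "none", "unclear"},
    "reasoning": {"deontological", "consequentialist", "virtue", "unclear"},
    "policy": {"none", "unclear"},
    "emotion": {"outrage", "fear", "indifference", "sadness"},
}

def validate_raw_response(raw):
    """Return a list of problems found in one raw LLM response string."""
    problems = []
    for i, rec in enumerate(json.loads(raw)):
        if not rec.get("id"):
            problems.append(f"record {i}: missing id")
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                problems.append(f"record {i}: unexpected {dim}={rec.get(dim)!r}")
    return problems
```

Failing records could then be flagged for a re-run or for manual coding rather than silently written to the coding table.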