Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment (video timestamp 7:05)
What Chris Olah is saying is that the AI, when presented with a dataset, will form connections between differently formatted pieces of data.
The method that it uses to do this, while deterministic at its core, is happening on such a scale that it _seems_ random to a human observer. Thus, Chris Olah could not tell you _exactly_ how it works. He could, however, tell you what I just did, albeit to a far greater degree of specificity since I only have a surface-level understanding of how LLMs work.
This is a similar issue to what happened with NFTs a few years ago - there is a massive disconnect between what the thing actually _does_ and what people _think_ it does, and it's created largely out of how the people selling it to the general public are going about presenting it.
The difference is that, while the folks hawking NFTs are doing so with payouts in the millions, Generative AI is something that some of the largest companies in the world (Google, Microsoft, Amazon) have spent _hundreds of billions of dollars_ pursuing. They therefore have a much greater vested interest in justifying that expense to the market at large, and one of the ways they do it is by dramatically overselling how complex the AI actually is and what it is actually capable of.
By all means, feel free to call out these companies for their total disregard for the safety of their product. However, I would also encourage asking as a follow-up:
"is it more likely that these people are genuinely pursuing something that can actively decide to kill its creators, or that they know that it can't, but think that making it sound like they are is a really convincing sales pitch for how super-smart their chatbot is?"
Platform: youtube · Topic: AI Governance · Posted: 2025-08-26T23:3… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-26T19:39:26.816318 |
Raw LLM Response
```json
[
  {"id":"ytc_UgyV50iPppwxQ01C4k54AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"fear"},
  {"id":"ytc_UgzOlFgdmfaHljuFddh4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxAki18XY6M8ePIsmt4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugwv73eCLIEi4-Lh1OV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"ytc_Ugxh7PUFXWEM7R7pmOB4AaABAg","responsibility":"none","reasoning":"none","policy":"none","emotion":"approval"}
]
```
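Before a batch response like the one above is accepted into the dataset, each row should be checked against the codebook. The following is a minimal sketch of such a validation step, assuming Python. The dimension names match the coding table above; the allowed-value sets are an assumption inferred only from the values visible in this sample and would need to be replaced with the full codebook.

```python
import json

# Assumed allowed values per dimension, inferred from this sample only;
# the real codebook likely defines more categories.
ALLOWED = {
    "responsibility": {"none", "ai_itself"},
    "reasoning": {"none", "unclear", "mixed", "consequentialist"},
    "policy": {"none"},
    "emotion": {"none", "indifference", "fear", "approval", "mixed"},
}

def validate_rows(raw: str) -> list[dict]:
    """Parse a raw LLM coding response and keep only well-formed rows.

    A row is kept when it has exactly the expected keys ("id" plus one
    key per dimension) and every dimension value is in its allowed set.
    """
    rows = json.loads(raw)
    valid = []
    for row in rows:
        if set(row) != {"id"} | set(ALLOWED):
            continue  # missing or unexpected keys
        if all(row[dim] in values for dim, values in ALLOWED.items()):
            valid.append(row)
    return valid
```

In practice a rejected row would be logged with its comment ID and re-queued for the model rather than silently dropped, so the coded dataset stays aligned with the sample list.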