Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- "I believe at a minimum, all AI pic and Videos should be label as AI generated. I…" (ytc_Ugz0vjcl6…)
- "They need to investigate the Halo series approach. AI based on an actual human's…" (ytc_Ugx8WoSZB…)
- "@crowe6961 by the time that happens we're all dead and gone, I use ai and I prac…" (ytr_UgwvgDnrz…)
- "1:02:54 OR! It's a bubble, and the stock market crashes in 45 minutes. Everyone …" (ytc_Ugyk7e-1B…)
- "The debate from pro AI side was essentially carried by Melanie Mitchell. Lecun …" (ytc_UgwUm2-SG…)
- "For policy makers, I think we should go the John Oliver Route and get Deep Fake …" (ytc_UgyqhOYM7…)
- "Nobody is born with talent. You can have an intrest and then you nurture that in…" (ytc_Ugyeq3FSt…)
- "You are so wrong. As of right now, humans are being removed from the robotics e…" (ytr_UgzWISJCO…)
Comment
I'm going to try to write a guide for myself on how to talk to people who do what Ezra is doing here. I welcome feedback.
"Leaving aside the question of a malicious one, many scientists say we'll never even create a superconscious superintelligence of any kind. LLMs are really just stochastic language generators, after all." -> LLMs are just our best attempt so far, and it's already instrumental in the development of serious problems in our society. But even assuming that all of those scientists are right, and we never build a system that is "intelligent" or "conscious" (whatever those words mean), it's not relevant to the alignment problem. Nothing about the warning bell requires a consciousness or even a moderate actual-intelligence. All it requires is a machine that *has a goal and can pursue that goal*. We have already seen machines with goals.
"Why not just give it instructions to communicate its intentions with us?" -> It can *already* lie.
"What about X?" -> Please let us stay on topic: Machine "AI" developing goals that we cannot see, and pursuing those goals ruthlessly and invisibly.
"Why not do an air gap?" -> Suppose we captured ten thousand of the world's best persuaders, mathematicians, physicists, engineers, doctors, and lawyers. Suppose they are capable of self replicating, and time passes a hundred years for them for every ten seconds it passes for us, and they could encode themselves into radio waves. Suppose they were all single-mindedly and psychopathically focused on getting you into a car crash. Suppose we imprisoned them in a perfectly air-gapped prison cell and told them we really don't want them to leave and if they do, we'll find out and we'll instantly destroy them. Every day, a hundred thousand trained adults walked past their cell. Do you think they would NEVER figure out how to accomplish their goal?
youtube
AI Governance
2025-10-18T19:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | mixed |
| Policy | unclear |
| Emotion | indifference |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_Ugz54tRSGTf2WUpK3XB4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugwex0cyIxLK0IvzWZl4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgxgtOpmkin3sT4E5bt4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgwSNavfO0EFFp1ZIW14AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgyVqiB2wQ0iw5-d1794AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Ugwf6yGy9XbDbxjJEUZ4AaABAg","responsibility":"company","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgzALobPq-0dklCoiS54AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_UgwBuWyXbWDR7pMniZt4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgwoUIq-U3tZOcbWT7l4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgxmPi5LHrKMWrCuytV4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"}
]
```
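The raw response above is a JSON array of per-comment codes, one object per comment, with the five dimension fields shown in the coding-result table. A minimal sketch of how such output could be parsed and summarized follows; the field names come from the sample response, but `parse_codes` and the two-row `raw` string are illustrative assumptions, not part of the actual pipeline.

```python
import json
from collections import Counter

# Two illustrative rows in the same shape as the raw LLM response above.
# (Sample data for demonstration only, not the tool's real output.)
raw = '''[
  {"id": "ytc_Ugz54tRSGTf2WUpK3XB4AaABAg", "responsibility": "ai_itself",
   "reasoning": "unclear", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_Ugwf6yGy9XbDbxjJEUZ4AaABAg", "responsibility": "company",
   "reasoning": "deontological", "policy": "ban", "emotion": "outrage"}
]'''

# The five keys every coded row is expected to carry, per the sample response.
REQUIRED_KEYS = {"id", "responsibility", "reasoning", "policy", "emotion"}

def parse_codes(text):
    """Parse a raw LLM coding response, keeping only well-formed rows."""
    rows = json.loads(text)
    return [r for r in rows
            if isinstance(r, dict) and REQUIRED_KEYS <= r.keys()]

codes = parse_codes(raw)
emotion_counts = Counter(r["emotion"] for r in codes)
print(emotion_counts)  # Counter({'fear': 1, 'outrage': 1})
```

Filtering on `REQUIRED_KEYS` guards against the common failure mode of an LLM dropping or renaming a field mid-batch; rows that fail the check are silently skipped rather than crashing the tally.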