Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up a coded comment by its ID, or browse the random samples below.
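The same lookup can be reproduced offline against an export of the coded comments. A minimal sketch, assuming the raw responses have been exported to a hypothetical `raw_llm_responses.json` file containing a list of records with an `id` field plus the four coded dimensions (the file name and layout are illustrative, not the tool's actual storage format):

```python
import json


def lookup_coded_comment(comment_id: str, path: str = "raw_llm_responses.json") -> dict | None:
    """Return the coded record for one comment ID, or None if it is absent.

    Assumes an exported JSON list of records shaped like the raw LLM response
    shown on this page: {"id": ..., "responsibility": ..., "reasoning": ...,
    "policy": ..., "emotion": ...}. The file name is hypothetical.
    """
    with open(path, encoding="utf-8") as f:
        records = json.load(f)
    return next((r for r in records if r.get("id") == comment_id), None)


# Example: inspect one record by its full comment ID.
record = lookup_coded_comment("ytc_UgxkCSNypKcj0fM-1wh4AaABAg")
if record is None:
    print("No coding found for that comment ID.")
else:
    for dimension in ("responsibility", "reasoning", "policy", "emotion"):
        print(f"{dimension}: {record.get(dimension, 'missing')}")
```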
Random samples
- "The scary thing is, the more robots and ai advance, the more plausible the theor…" (`ytc_Ugzu5HlCn…`)
- "What will people do with their time??? Omfg. Uh, maybe they'll actually get to…" (`ytc_UgwhZFMH1…`)
- "A professor who taught us AI in healthcare was telling when they were working wi…" (`ytc_UgxvqZCql…`)
- "You'd think he would just not use AI art because he doesn't like making art but …" (`ytr_UgwykDHGa…`)
- "Humans are inherently tribal and have been for millennia, of course AI is going …" (`rdc_dgcci0r`)
- "You realise your spreading racism? Want to end it? Stop talking about it. Simpl…" (`ytc_UgzJGEOCJ…`)
- "Dr. Jain's talk on machine learning is enlightening! Reminds me of how Pneumatic…" (`ytc_Ugzdsqaxl…`)
- "Thank you for your enthusiasm! Sophia definitely brings a unique perspective to …" (`ytr_Ugw3U12EX…`)
Comment
They're both wrong.
AI can't write it's own rules because this isn't a human thing or an intelligence thing. It's the way logic works. They aren't just made up rules. They are built on first principles when you look at existence and thought.
Why are we able to understand Godel but maybe computers aren't? Well first, Godel's thing translated to comp sci is all about logic error. What happens when you test whether a statement is true or false when one side of the comparison is undefined? We don't even let the program run. We throw an error. You don't need to understand that as a computer to implement it.
There isn't any escape. A sufficiently expressive language (programming, math, DNA, chemistry etc) can contain unprovable statements because of self-reference. You can't drop self reference without destroying expressiveness.
Why is the old dude wrong... prove it to yourself. An LLM can explain it better than he can. Far better.
Ask gpt to write a proof showing Godel is wrong. Ask why it won't? Ask it to write a program that doesn't crash that proves Godel wrong. Ask why it can't?
You'll find llms understand it well enough when it's in their context.
The problem isn't even llms, it's our failure to understand what they're missing and add that. Loops, memory, reflection.
We fake loops, memory and reflection with chain of thought, "thinking" and context but they're all kludges.
When we really integrate them, we'll likely find consciousness is as computable as intelligence.
youtube · AI Moral Status · 2025-09-25T05:4… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | deontological |
| Policy | none |
| Emotion | mixed |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_UgysWATtyTjHMgOPhJF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgzNeFUFb_G4Dw9WiTx4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_UgxZIlHVfxcCWWLHoNJ4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_UgwN6SwiUclXNxAncaV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwdExlVB-trDoiMm1V4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgwKbaSELT9CnAisF3h4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_UgyMYxucA5c69Pksk2d4AaABAg","responsibility":"company","reasoning":"virtue","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgxkCSNypKcj0fM-1wh4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"mixed"},
{"id":"ytc_UgygscEGTycwIQ1dY_F4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
{"id":"ytc_UgxOjdVdutlc54YTYCJ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"}
]
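Because the raw response is plain text produced by the model, it is worth parsing it defensively before trusting the coded values. A minimal sketch, assuming the response is a JSON array like the one above and using only the value sets observed in this sample (the actual codebook may allow additional categories):

```python
import json

# Allowed values per dimension, as observed in the sample response above.
# The real codebook may define more categories; treat these sets as assumptions.
ALLOWED = {
    "responsibility": {"none", "unclear", "user", "company", "ai_itself"},
    "reasoning": {"unclear", "deontological", "consequentialist", "virtue"},
    "policy": {"none", "liability", "regulate"},
    "emotion": {"approval", "mixed", "fear", "indifference", "outrage"},
}


def parse_raw_response(raw: str) -> dict[str, dict]:
    """Parse one raw LLM response into {comment_id: coded_dimensions}.

    Skips entries that are not dicts, lack an "id", or carry a value outside
    the observed codebook, so one malformed entry does not poison the batch.
    """
    try:
        entries = json.loads(raw)
    except json.JSONDecodeError:
        return {}  # the model returned something that is not valid JSON

    coded: dict[str, dict] = {}
    for entry in entries if isinstance(entries, list) else []:
        if not isinstance(entry, dict) or "id" not in entry:
            continue
        values = {dim: entry.get(dim) for dim in ALLOWED}
        if all(values[dim] in ALLOWED[dim] for dim in ALLOWED):
            coded[entry["id"]] = values
    return coded
```

Run against the array above, this sketch keeps all ten entries, since every value falls inside the observed sets; an entry with an unexpected label or a truncated JSON response would simply be dropped rather than raising mid-batch.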