Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
They're both wrong. AI can't write its own rules, because this isn't a human thing or an intelligence thing. It's the way logic works. These aren't just made-up rules; they're built on first principles when you look at existence and thought.

Why are we able to understand Gödel but maybe computers aren't? Well, first, Gödel's result translated to comp sci is all about logic errors. What happens when you test whether a statement is true or false when one side of the comparison is undefined? We don't even let the program run. We throw an error. You don't need to understand that as a computer to implement it. There isn't any escape: a sufficiently expressive language (programming, math, DNA, chemistry, etc.) can contain unprovable statements because of self-reference, and you can't drop self-reference without destroying expressiveness.

Why is the old dude wrong? Prove it to yourself. An LLM can explain it better than he can. Far better. Ask GPT to write a proof showing Gödel is wrong. Ask why it won't. Ask it to write a program that doesn't crash that proves Gödel wrong. Ask why it can't. You'll find LLMs understand it well enough when it's in their context.

The problem isn't even LLMs, it's our failure to understand what they're missing and add that: loops, memory, reflection. We fake loops, memory, and reflection with chain of thought, "thinking", and context, but they're all kludges. When we really integrate them, we'll likely find consciousness is as computable as intelligence.
Source: youtube · Video: AI Moral Status · Posted: 2025-09-25T05:4… · ♥ 1
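The commenter's "undefined comparison" point maps onto how typed runtimes actually behave. A minimal sketch in Python (the language is my choice for illustration; the commenter names none): ordering a value against something undefined doesn't evaluate to true or false, it refuses to run at all.

```python
# Python 3 will not order None against an int: the comparison is
# neither true nor false; it raises before any "answer" exists.
try:
    result = None < 3
except TypeError as err:
    print(err)  # '<' not supported between instances of 'NoneType' and 'int'
```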
Coding Result
Dimension        Value
Responsibility   none
Reasoning        deontological
Policy           none
Emotion          mixed
Coded at         2026-04-26T23:09:12.988011
Raw LLM Response
[ {"id":"ytc_UgysWATtyTjHMgOPhJF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"}, {"id":"ytc_UgzNeFUFb_G4Dw9WiTx4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"none","emotion":"mixed"}, {"id":"ytc_UgxZIlHVfxcCWWLHoNJ4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"liability","emotion":"fear"}, {"id":"ytc_UgwN6SwiUclXNxAncaV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}, {"id":"ytc_UgwdExlVB-trDoiMm1V4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"}, {"id":"ytc_UgwKbaSELT9CnAisF3h4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"}, {"id":"ytc_UgyMYxucA5c69Pksk2d4AaABAg","responsibility":"company","reasoning":"virtue","policy":"regulate","emotion":"outrage"}, {"id":"ytc_UgxkCSNypKcj0fM-1wh4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"mixed"}, {"id":"ytc_UgygscEGTycwIQ1dY_F4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"mixed"}, {"id":"ytc_UgxOjdVdutlc54YTYCJ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"} ]