Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
Agreed. Can’t say i hate AI art, if the modern “artist” plastered banana peal on…
ytr_Ugwmy9yKo…
Well a basic autopilot on a boat or a plane just means it will hold your heading…
ytr_UgwX67XbI…
When your completely scripted puppet show to demonstrate AI looks absolutely lik…
ytc_UgyaIBMfB…
Human level AI is not in the foreseeable future. It is still very juvenile and i…
rdc_g69e7l8
comical how far this channel has fallen behind what is actually happening in ai …
ytc_Ugy1GBdRE…
Im sorry but AI chat has disclaimers. Should we put warning labels on bathtubs s…
ytc_UgyBRndPS…
But what if humans generally become more skilled? Hasn't that been a trend for a…
ytc_UgzsalTjv…
it's a human trait to give an answer despite not knowing. If AI avoided this, it…
ytc_UgyshaDX-…
Comment
@ledrid6956 Are you certain they're not conscious? If so, how? There isn't a philosophically certain test for what causes phenomenological experience. We can debate philosophical theories of mind and consciousness, from physicalism to epiphenomenalism, but we can't even know for certain whether other people outside of ourselves are conscious. We can make some reasonable scientific assumptions, but we don't currently have a thorough explanation that rules out machine consciousness.
One approach to testing for consciousness in AI is to train a model on a dataset that completely lacks any mention of "consciousness" or "emotion" or anything related, and then talk to it and see whether it brings up consciousness as an emergent property. If it did, that would give credence to the physicalist interpretation of consciousness, suggesting that consciousness causally influences us to talk about it in the first place. It's still possible that the AI might have learned to approximate talk about consciousness for some other reason, behaving more like a philosophical zombie, but I would argue that's much less likely. If such a system didn't talk about consciousness even when asked about it, that would give credence to the idea that these systems probably are not conscious.
If you want my intuitive answer: I don't believe they are conscious, or at the very least, I don't think they are as conscious as we are; they obviously lack some key features we have. That said, if they are not conscious, we have to conclude they can be incredibly skilled at a lot of things without needing consciousness. You could argue that because they trained on data generated by conscious entities, that helps bootstrap them or close any gap where consciousness might be needed, and I would agree (especially with things like art), but for the vast majority of tasks it's pretty clear that AIs don't require consciousness (again, assuming they're not conscious) to be superhuman at a lot of skills.
A perfect example of this is AlphaZero, the successor to AlphaGo, which used reinforcement learning, with no human data at all, to master the game of Go. No human can ever hope to beat AlphaZero, which crushed AlphaGo (the AI that defeated Go master Lee Sedol) by 100 games to 0. That's pretty convincing evidence that an agentic system can learn on its own and model a complex environment without being conscious. There's still the evolutionary question of why we have consciousness and what utility it serves, whether that's aiding our agency or some deeper reason, but we shouldn't bet on consciousness being a river AI cannot cross when AI already outperforms us across so many other domains. I say this as someone deeply interested in phenomenology and arguments about machine and human consciousness. I love talking about consciousness, but it's important to recognize that systems we regard as lacking it can outperform us.
I would heavily push back on your belief that we have unique free will. I give a reasonable amount of credence to the possibility that free will exists (I take a Kantian approach to that problem), but for the most part the universe appears deterministic or, at most, random, depending on your interpretation of quantum mechanics (Copenhagen vs. many-worlds, I mean).
So, tl;dr: I think it's reasonable to treat powerful agentic AIs as though they have 'real' goals and 'motives'. At the very least, they have complex drives.
youtube
AI Governance
2025-08-29T00:5…
♥ 4
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | mixed |
| Policy | none |
| Emotion | mixed |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
[
{"id":"ytr_UgxoCtXxVGUMylnbUKp4AaABAg.AMMVeHhZ_CCAMNf4Llv8jc","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytr_UgxoCtXxVGUMylnbUKp4AaABAg.AMMVeHhZ_CCAMOFqJqmbLo","responsibility":"none","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytr_UgxoCtXxVGUMylnbUKp4AaABAg.AMMVeHhZ_CCAMOMo8xT9M3","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytr_Ugx5Ai4xn2qCXGTnegN4AaABAg.AMLucwwjrWlAMMGyJ_v2Zm","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytr_UgwHYRPfGwNlHXWXURR4AaABAg.AMLroQvDxmtAMwuUW4pg0S","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytr_UgzbgPnwynGf75XgKNp4AaABAg.AMLTSkFHfByAMNll8kfwOD","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytr_UgwtiVwM7oQQfACTRJ94AaABAg.AMLPHd_BGZ5AMLQsFFwXtE","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"approval"},
{"id":"ytr_Ugzda0CqwR2pwyg5z9J4AaABAg.AMLK_oVa3m7APoEGQJnoe3","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytr_UgxruEjafI6CsLocy7Z4AaABAg.AMLJsptIyK_AMNqYBsO8LQ","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytr_UgxruEjafI6CsLocy7Z4AaABAg.AMLJsptIyK_AMOcQgFtxbn","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}
]
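The coded dimensions above follow a small fixed schema, so a downstream consumer can validate each row of the raw response before aggregating. Here is a minimal sketch in Python; the allowed label sets are inferred from the samples visible above and are assumptions, not a confirmed codebook, and `validate_coding` is a hypothetical helper name.

```python
import json

# Assumed label sets, inferred from the visible samples (not a confirmed codebook).
ALLOWED = {
    "responsibility": {"none", "user", "government", "ai_itself"},
    "reasoning": {"mixed", "consequentialist", "deontological", "unclear"},
    "policy": {"none", "ban", "regulate"},
    "emotion": {"indifference", "fear", "mixed", "resignation", "approval"},
}

def validate_coding(raw: str) -> list[dict]:
    """Parse the raw LLM response and reject any row with an unknown label."""
    rows = json.loads(raw)
    for row in rows:
        for dim, allowed in ALLOWED.items():
            value = row[dim]
            if value not in allowed:
                raise ValueError(f"{row['id']}: unexpected {dim}={value!r}")
    return rows

# Example usage with a single well-formed row (hypothetical ID).
raw = ('[{"id":"ytr_example","responsibility":"none",'
       '"reasoning":"mixed","policy":"none","emotion":"fear"}]')
rows = validate_coding(raw)
```

Validating against a closed label set catches the most common failure mode of LLM-based coding, the model inventing an off-codebook category, before it silently pollutes downstream counts.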