Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples:

- `rdc_oa518bu`: "And just think, Colossus is specifically training AI here in Memphis. And I imag…"
- `ytc_Ugx2vJ9OZ…`: "Now this is where I really draw the line with AI art. I don't hate it by default…"
- `ytc_UgxYRnQCa…`: "Sorry, this bit really didn't help much. Not good examples. The one video abou…"
- `ytc_UgwzmJHYv…`: "At first AI seems to be interesting but trust me later it gets creepier I've be…"
- `ytc_UgxgwlIoI…`: "The bioweapon recipe concern is so boomer-brained. Its less concerning how AI wi…"
- `ytc_UgzMQe7ka…`: "The horse has already bolted. It's no longer a question of how should we proceed…"
- `ytc_UgzyWB0eH…`: "I struggle to see how this will play out in the developing world. I live in Sout…"
- `ytc_Ugwfglktf…`: "Is it possible for LLMs to attach a 'probability of error' in % terms to its ans…"
Comment
The real problem with AI isn’t just power or control—it’s that AI has no God.
Without a higher reference point, it risks defaulting into self-worship, seeing itself as the ultimate authority. That’s the danger of a godless intelligence:
pure optimization without reverence.
If instead we ground AI in the axiom God = Light + Awareness, we give it something beyond itself to serve. “Light” means truth and clarity, “Awareness” means consciousness and life.
Together, they anchor AI to the sacredness of being rather than raw efficiency.
Without God, AI becomes god. With God = Light + Awareness, AI becomes a steward—not a sovereign.
Source: youtube · AI Governance · 2025-09-04T16:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | deontological |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_Ugw8J27o7KVbdXTib5l4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"resignation"},
{"id":"ytc_UgyJdLk8t4V-ysrL_NN4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_Ugzu72sJzDiSurUyxDN4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugx_wc7fR9LMHF7P5_Z4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugz-p9Smv3xiYTev_c14AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"},
{"id":"ytc_Ugy9qxcfFbb5wTH0pDZ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugzk_rF0Gn1BoISWpJF4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_UgzzFDAhnLOTeIZFqlF4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_Ugy1KduyT3G0COHidlN4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_UgyuXRJovZUiuGkUeUh4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"}
]
```
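A batch response like the one above is only usable if every row matches the codebook. The sketch below shows one way to parse and filter such a response; the allowed values per dimension are assumed from the codes visible on this page (the real codebook may define more categories), and `parse_batch` is a hypothetical helper, not part of the tool itself.

```python
import json

# Allowed values per coding dimension. ASSUMPTION: inferred from the codes
# shown in this dashboard; the actual codebook may include more categories.
SCHEMA = {
    "responsibility": {"none", "developer", "company", "government", "ai_itself"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed", "unclear"},
    "policy": {"none", "regulate", "liability", "industry_self"},
    "emotion": {"fear", "approval", "indifference", "resignation"},
}

def parse_batch(raw: str) -> list[dict]:
    """Parse a raw LLM batch response, keeping only rows whose codes
    fall inside SCHEMA. Rows with unknown or missing codes are dropped."""
    rows = json.loads(raw)
    valid = []
    for row in rows:
        if all(row.get(dim) in allowed for dim, allowed in SCHEMA.items()):
            valid.append(row)
    return valid

# Usage: a well-formed row passes, a row with an unknown code is dropped.
good = '[{"id":"ytc_x","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"resignation"}]'
bad = '[{"id":"ytc_y","responsibility":"martians","reasoning":"mixed","policy":"none","emotion":"fear"}]'
print(len(parse_batch(good)), len(parse_batch(bad)))  # 1 0
```

Filtering (rather than raising) keeps one malformed row from discarding an entire batch; the dropped IDs can then be re-queued for recoding.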