Raw LLM Responses
Inspect the exact model output for any coded comment, looked up by comment ID.
Random samples (click to inspect):

- `rdc_ckqcakj`: >The train of thought is that 1 dollar invested in the developing world do mo…
- `ytc_Ugyg_IVBX…`: Gonna be honest here: I don't actually hate AI content. My problem, and I gues…
- `ytc_Ugw1aCNv_…`: The AI 'Human'(H) rule goes as follows: "Whenever & wherever possible remove the…
- `ytr_Ugwqx-TXW…`: @d_trichhe’s more of a double agent , telling us to move toward ai , just safely…
- `ytr_Ugz_Hog5x…`: I wouldn't say AI started as a lie. It started in science fiction as a central …
- `ytc_UgxHkfbBy…`: The camera argument is so stupid there's no way it's not just a coping mechanism…
- `ytc_UggfHiAyN…`: well if it asks it need rights then give them, if not then dont. try to make ai…
- `ytc_Ugz2H6tEY…`: I’m only half way through so maybe this gets addressed but if not. I’d like to e…
Comment
Will God allow AI to destroy humanity? Short answer: No.
History belongs to God, not to Silicon Valley.
Jesus said:
“I will build My Church, and the gates of hell shall not prevail against it.” (Matt. 16:18)
The idea that God will sit helpless while His creation is wiped out by a machine is nonsense.
God permits human freedom, yes.
God allows suffering, yes.
But God also sets boundaries.
The flood did not destroy God’s plan.
Rome did not destroy God’s plan.
Communism did not destroy God’s plan.
AGI won’t either.
But —
God may allow humanity to feel the consequences of its pride, as a purification, not annihilation.
youtube · AI Governance · 2025-12-04T12:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | deontological |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_UgxQbfnmH12dozow7XB4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_UgzTXsP_Pi4LAbXLldR4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugz6B9Lxv-zkm3GXkmF4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"approval"},
{"id":"ytc_UgzM8N03eNM45EAyWXh4AaABAg","responsibility":"government","reasoning":"virtue","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgwMze8t9KspHvhCeaB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgxFCRfOcIsOix8NN2h4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugw3ltr30_tsVzB_ctN4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugy5GcBDX4dCQ1qqnbN4AaABAg","responsibility":"government","reasoning":"virtue","policy":"none","emotion":"outrage"},
{"id":"ytc_UgwUbBCWUgSWOCmEbu54AaABAg","responsibility":"distributed","reasoning":"deontological","policy":"regulate","emotion":"mixed"},
{"id":"ytc_UgyG5cV7CEF_EsLLvj54AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]
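
For downstream use, here is a minimal sketch of how a response like the one above could be parsed and sanity-checked. It assumes the raw output is a JSON array of objects carrying a comment `id` plus the four dimensions shown in the Coding Result table; the allowed-value sets are only the labels visible in this sample, not a full codebook, and `parse_raw_response` is a hypothetical helper, not part of the tool.

```python
import json

# Coding labels observed in the sample response above. The real codebook may
# define additional values; these sets are illustrative only.
OBSERVED_CODES = {
    "responsibility": {"developer", "company", "government", "ai_itself", "distributed", "none"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed", "unclear"},
    "policy": {"liability", "regulate", "none", "unclear"},
    "emotion": {"fear", "outrage", "approval", "indifference", "mixed"},
}


def parse_raw_response(raw: str) -> list[dict]:
    """Parse a raw LLM response (a JSON array of coded comments) and flag surprises."""
    records = json.loads(raw)
    kept = []
    for rec in records:
        if "id" not in rec:
            # Without a comment ID the record cannot be joined back to the corpus.
            continue
        for dim, allowed in OBSERVED_CODES.items():
            if rec.get(dim) not in allowed:
                print(f"{rec['id']}: unexpected {dim} value {rec.get(dim)!r}")
        kept.append(rec)
    return kept
```

Records that pass can then be joined back to their comments by `id`, which is presumably how the Coding Result panel above is populated.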