Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples:

- "The second robot staring 👀👀👀👀👀👀👁️👁️👁️👁️👁️👁️😂😂😂😂😂😂😂😂robot be like ummmmmm let me …" (ytc_Ugxff6oSU…)
- "AI bros are as smart as the people of Talca (region of Chile), with the "world's…" (ytc_UgzDa0sWa…)
- "AI is a wonderful concept IN an ideal society, like or closer to Star Trek. Unf…" (ytc_Ugx_NX4Qk…)
- "I rant because I feel like people here would understand: I have a deep hate of …" (ytc_UgyiYNsCQ…)
- "Clever human: "I have introduced a rule in the config that prohibits making pape…" (ytr_UgzE5c4Fl…)
- "He thinks character ai is the freakiest app wait till he knows about chai 💀…" (ytc_Ugz-DTS6L…)
- "The skills necessary for structuring and writing a paper will never be useless s…" (rdc_my4ygn6)
- "Any AI intelligent enough to pass the Turing test is intelligent enough to fail …" (ytc_Ugz8AtOGH…)
Comment
TL;DR: Roman’s risk model oddly confirms the world the New Testament predicted. He’s right that we can’t predict a superintelligence; Christianity adds that we also can’t audit God’s reasons—but we are given enough light in Christ to trust His goodness and to face the future with courage and hope.
Hot take from a Christian in tech: The scenario Roman outlines—unpredictable AGI/superintelligence, inability to “turn it off,” mass job displacement, coercive infrastructure, persuasive synthetic “images,” and widespread deception—tracks with 2,000‑year‑old biblical forecasts that were humanly inconceivable when written (e.g., Rev 13:14–17; Matt 24:12,24; 2 Thess 2:9–12). That’s not a proof, but it is a Bayesian nudge that the Christian story anticipated a world like this. Roman is right that lower intelligences can’t predict a higher agent; he just doesn’t extend that logic to God. He infers “simulators” with poor ethics from suffering, but Christianity claims God has revealed enough to render suffering intelligible without trivializing it: real creaturely freedom (moral evil), a creation now “subject to futility” (natural evil), and a God who entered history in Jesus to defeat evil without destroying evildoers—first at the cross, finally at the renewal of all things (Gen 1–3; Rom 8:18–23; Heb 2:14–15; Rev 21–22). See Roman’s own premises about unpredictability, unemployment, and simulation confidence in the transcript.
Also, “simulation theory” isn’t a swap‑in for theism. A simulation posits an in‑universe artisan; Christianity posits a necessary, self‑existent Creator of all reality. Conflating the two is a category error.
If this conversation left you feeling hopeless, here’s the Christian reason for hope: history is heading somewhere. Justice won’t be deferred forever; evil, death, and exploitation are on a clock; and the end is not heat‑death but union with God and a restored earth (Rev 21–22). That hope isn’t wish‑casting; it stands or falls on a testable claim—the resurrection (1 Cor 15). If you’re curious, read Mark’s Gospel in one sitting and ask, “If Jesus actually rose, what follows?” If he didn’t, ignore me. If he did, there’s solid ground for moral realism, unborrowed human dignity, and a future no AI can erase—even if 99% of jobs go away.
youtube
AI Governance
2025-09-07T21:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | mixed |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_UgwOUM93bVOeXVSGSwd4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
{"id":"ytc_Ugzzkph2BQmmocInZWF4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgyGJvTuMuKzwv_I4CF4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"mixed"},
{"id":"ytc_UgwchKByn6zNU5xejjp4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_UgxwMCiRH1jCI-btWQN4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgzJ9IBnJAq3Exv04GB4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgymqNSJRoqPekGlkCp4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_Ugy1qCkOD4ijx9bs0o94AaABAg","responsibility":"none","reasoning":"mixed","policy":"ban","emotion":"approval"},
{"id":"ytc_UgxIEbY-pR9lSzUx3uV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_UgySfI5oVPIfQ8XNzAl4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"}
]
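A raw response like the one above can be parsed and indexed by comment ID to recover per-comment codes. The sketch below is a minimal, hypothetical example: it assumes the model returns a JSON array whose field names match the "Coding Result" table (`responsibility`, `reasoning`, `policy`, `emotion`); the two embedded rows are copied from the response above, and the lookup helper is an illustration, not part of the original tooling.

```python
import json

# Hypothetical raw LLM response: a JSON array of per-comment codes.
# Two rows copied from the response shown above; field names follow
# the "Coding Result" table.
raw_response = """
[
  {"id":"ytc_UgwOUM93bVOeXVSGSwd4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
  {"id":"ytc_UgzJ9IBnJAq3Exv04GB4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"}
]
"""

# Parse the array and index the rows by comment ID.
codes = json.loads(raw_response)
by_id = {row["id"]: row for row in codes}

# Look up one comment's codes by its ID.
row = by_id["ytc_UgwOUM93bVOeXVSGSwd4AaABAg"]
print(row["reasoning"], row["emotion"])  # → mixed approval
```

In practice the parse step would be wrapped in error handling (e.g. `json.JSONDecodeError` for malformed model output), since LLM responses are not guaranteed to be valid JSON.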