Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Comment
Genesis 3:22-23 NIV
And the LORD God said, “The man has now become like one of us, knowing good and evil. He must not be allowed to reach out his hand and take also from the tree of life and eat, and live forever.” [23] So the LORD God banished him from the Garden of Eden to work the ground from which he had been taken.
At that point man was not mature in his decision-making regarding what was good and what was evil. Not individually and not in terms of society that was to come. That is still true today. So man today is rapidly trying to create a sentient AI that will lack the same lack of moral judgment but with much greater capacity to control the systems around it. What could possibly go wrong?
youtube
AI Governance
2025-06-03T19:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | deontological |
| Policy | unclear |
| Emotion | unclear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_UgztfVmYQPgjrsjyMXR4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgxcfxCtQbn22RLStD54AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"fear"},
  {"id":"ytc_UgzCSwt_pzfRsLdYfjp4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgyB4nA9a39KBVtTsGJ4AaABAg","responsibility":"developer","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Ugwai2tcNKMGP550CO54AaABAg","responsibility":"unclear","reasoning":"deontological","policy":"unclear","emotion":"unclear"},
  {"id":"ytc_UgxVy2p-adjQmZ1sY4d4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"unclear"},
  {"id":"ytc_UgxiEidUTsTs3ToYq5V4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"},
  {"id":"ytc_Ugx4DQ2auGbqIMF4mTt4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgyhLOhidtjxalMc2054AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgzyqoGaHSCNWuqzHvx4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"}
]
```
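The raw response above is a JSON array of coding records, one per comment, each carrying the comment `id` plus the four coded dimensions (`responsibility`, `reasoning`, `policy`, `emotion`). A minimal sketch of the "look up by comment ID" step, assuming that record shape (the two sample records below are copied from the raw response above, not a hypothetical schema):

```python
import json

# Raw model output: a JSON array of coding records, one per comment.
# These two records are taken verbatim from the raw LLM response shown above.
raw_response = """[
  {"id": "ytc_UgztfVmYQPgjrsjyMXR4AaABAg", "responsibility": "government",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_UgxcfxCtQbn22RLStD54AaABAg", "responsibility": "ai_itself",
   "reasoning": "deontological", "policy": "ban", "emotion": "fear"}
]"""

def index_by_comment_id(raw: str) -> dict:
    """Parse a raw LLM response and index its coding records by comment ID."""
    records = json.loads(raw)
    return {rec["id"]: rec for rec in records}

codings = index_by_comment_id(raw_response)
record = codings["ytc_UgztfVmYQPgjrsjyMXR4AaABAg"]
print(record["policy"])  # → regulate
```

Records whose `id` is missing from the response (or whose dimensions come back as `unclear`) would still round-trip through this index unchanged; the dashboard's per-comment view then renders the matched record as the "Coding Result" table.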