Raw LLM Responses
Inspect the exact model output behind any coded comment.
Comment
I found this conversation incredibly thoughtful and honest. Thank you for sharing it so openly.
One perspective I would like to gently add is that our purpose as human beings does not ultimately come from our jobs or our intellect, but from something deeper: our relationship with God. As a Catholic, I believe that no matter how powerful AI becomes, our dignity and mission remain rooted in love, service, and faith.
If we hold fast to that foundation, we do not need to fear what the future holds. Even as we work to steer AI in the right direction, we must remember: "The gates of hell will not prevail against the Church." Collaborating with AI, as I have done in creating works of faith, can lead to beautiful things when guided by truth.
The real danger is not superintelligence, but losing sight of who we are.
Source: youtube · Topic: AI Governance · Posted: 2025-06-16T23:5… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | virtue |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
[
{"id":"ytc_Ugxr6fyHTZOBODgZwSx4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgzP4iv1Rz8nD6jbSMl4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_UgyWZ1ABfkOF_FwQu3Z4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"industry_self","emotion":"mixed"},
{"id":"ytc_UgwpoxZbYFkYMFz0ejN4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgyZoIIx12D3wjWYhsZ4AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"indifference"},
{"id":"ytc_UgzRjlqeZF4unDbCpcF4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"approval"},
{"id":"ytc_UgyaBRCXYKen3UjWySR4AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgzDn1VjbAIm7QOfVy14AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytc_UgyIhnw_SuAtHmli6MN4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_UgyAdkneji7-9amxnip4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"}
]
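The raw response above is a JSON array in which each element carries a comment ID plus the four coded dimensions (responsibility, reasoning, policy, emotion). A minimal sketch of how a viewer like this one might parse such a response and look up a single comment's coding (the variable names are illustrative, not the tool's actual implementation):

```python
import json

# Hypothetical raw LLM response, same shape as the array shown above
raw_response = """
[
  {"id": "ytc_UgyIhnw_SuAtHmli6MN4AaABAg", "responsibility": "none",
   "reasoning": "virtue", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgyAdkneji7-9amxnip4AaABAg", "responsibility": "none",
   "reasoning": "virtue", "policy": "none", "emotion": "approval"}
]
"""

# Index the codings by comment ID so any single comment can be retrieved
codings = {row["id"]: row for row in json.loads(raw_response)}

coding = codings["ytc_UgyIhnw_SuAtHmli6MN4AaABAg"]
print(coding["reasoning"])  # virtue
print(coding["emotion"])    # approval
```

Indexing by ID also makes it easy to detect comments the model skipped: any requested ID missing from the dict was not coded in that batch.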