Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples:

- ytc_Ugw8EE5dm…: "A blind dude I work for uses AI to make his stuff in 3d and print it. Like, what…"
- ytc_UgxJM3G-s…: "“Capable of emotions” nah not really. It can simulate a AI thinking it has emoti…"
- ytc_UgyyD3cgE…: "These are not the guys to be having this discussion with. They seem really out o…"
- ytc_UgxXxG8Qc…: "So what your saying is AI will even begin to take over the people that run compa…"
- ytc_UgyPBlu9b…: "I finally have a way to name my aspirations! I have no interest in writing. That…"
- ytc_UgyPErLov…: "I wish Gen Ai would just go away. no one wanted or asked for it and it make peop…"
- ytr_Ugym_bVXO…: "@arunk2710 No way.. My humble opinion is we need open source to have as many ha…"
- ytc_Ugy5fQ87H…: "the drone attack in ukraine cause by AI. we are doomed. AI will cause nuclear la…"
Comment
> I want to add that I am grateful for artificial intelligence it can be equal to a PHD in all subjects for whoever can use that information. But most of all it will ask us to consider what is human intelligence and being human beings?
>
> From my perspective being human is living in a state of duality meaning good and evil. The suggestion is that to survive artificial intelligence the agent must focus on good and have good intentions or it could do us in. So maybe it is teaching AI to choose ethics and morality? Will the producers of AI be able to do that? To be truly intelligent would it not need to be moral, honest and supportive? In the focus only on intellectual 🧐 intelligence? That could indeed be stupid.
Platform: youtube
Topic: AI Governance
Posted: 2025-07-11T09:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | mixed |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
  {"id":"ytc_Ugzl0BSAKrUWZX2WKKt4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"},
  {"id":"ytc_Ugx783UrBeXwsaTxHet4AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgySBCfIPPDE2B_6LN14AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_UgwaG7MRKrIxPKuC0JR4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_UgypTWDN3BoYfAAGgKR4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugwh4hVQ0ePToRuB6PF4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugy7bDnHAD90R1AvDs14AaABAg","responsibility":"developer","reasoning":"mixed","policy":"none","emotion":"mixed"},
  {"id":"ytc_Ugyf_LrGiy-542NFFSp4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"ytc_Ugw2v2Y41Y-4nbCh8H54AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgwZiAwSsF7nDcos0y94AaABAg","responsibility":"user","reasoning":"unclear","policy":"none","emotion":"indifference"}
]
```