Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples

- "If the humans are reduced to just physical properties or interactions then it do…" (ytc_UgyEoBVu3…)
- "I am neither for or against Ai, but I think part of what gave rise to it is simi…" (ytc_UgzB3Qgk1…)
- "On the bright side, China is not that dumb to create AI smarter than humans. Who…" (ytc_Ugzi_eO4J…)
- "Nah bro every artist will just fail art school and be replaced by Sora AI and Ju…" (ytc_UgyjgqqKC…)
- "Regulation wont work, or do we really think the likes of China will abide by the…" (ytc_UgytiFKVK…)
- "None of those publicly funded, non-profit or OSS alternatives would get any adop…" (rdc_o20tp7u)
- "Isn't the whole point of ai art that people can fill the void inside their heart…" (ytc_UgzJ4Yv5V…)
- "With all this new ai stuff, I’ve grown a very strong appreciation for newer arti…" (ytc_UgwOjK6Fr…)
Comment
> You do know that our AI’s are bias? Drill down on any of the moral hot topics if today m: gender, sexual preference, race, climate, animal rights, Palestinians/Jews/Gaza or liberal vs conservative. Ask tough questions. Your AI might even “lectured” you. It sometimes assumes you are bias. If you are clever and persistent in your examination of your AI, it will admit its bias and promise to do better! lol until you ask about another moral issue.
>
> I’m 76 and am so thankful to have lived to use these AI’s. I have four in my library and use them all day long. But their purpose is NOT to be first of all honest. Drill down on your AI. You can discover its biases.
Platform: youtube
Topic: AI Governance
Posted: 2025-08-14T23:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | contractualist |
| Policy | industry_self |
| Emotion | approval |
| Coded at | 2026-04-27T06:24:59.937377 |
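The four coded dimensions draw from a closed vocabulary. A minimal validation sketch in Python; the value sets below are inferred only from the sample codings visible on this page and are not the full codebook:

```python
# Allowed values per coding dimension -- a hypothetical subset inferred
# from the sample codings on this page, not an exhaustive codebook.
CODEBOOK = {
    "responsibility": {"developer", "company", "government", "user", "ai_itself", "none"},
    "reasoning": {"consequentialist", "deontological", "virtue", "contractualist"},
    "policy": {"regulate", "ban", "liability", "industry_self", "none"},
    "emotion": {"fear", "outrage", "approval", "mixed"},
}

def validate_coding(row: dict) -> list[str]:
    """Return a list of problems; an empty list means the coding is valid."""
    problems = []
    for dim, allowed in CODEBOOK.items():
        value = row.get(dim)
        if value not in allowed:
            problems.append(f"{dim}={value!r} not in codebook")
    return problems

# The coding shown in the table above validates cleanly:
coding = {"responsibility": "developer", "reasoning": "contractualist",
          "policy": "industry_self", "emotion": "approval"}
print(validate_coding(coding))  # -> []
```

A check like this is useful as a guardrail between the raw LLM output and the database, since the model can emit values outside the codebook.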
Raw LLM Response
```json
[
{"id":"ytc_UgxxPigwrHOtLEXn-Cd4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"outrage"},
{"id":"ytc_UgyOxTGWEM5nUd8YKdZ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_Ugy9KRUXUHiJGuZo5_14AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"mixed"},
{"id":"ytc_UgwvIpEw48LIqlJxapR4AaABAg","responsibility":"developer","reasoning":"contractualist","policy":"industry_self","emotion":"approval"},
{"id":"ytc_Ugw_N-fEpHDtlMIUSbN4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"fear"},
{"id":"ytc_UgwYooH_Pi32dvnuOdt4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugw0GHYMeHno1fEKJIV4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugzq3LJ9NANtxzpw0Tx4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugylnob7WdAT_rVcTJx4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_Ugy9yhs8HCLUvxy8aw14AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"outrage"}
]
```