Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples (click to inspect):

- "that's the thing with AI art AI makes the art look soulless not really expressiv…" (ytc_UgzhjEkIQ…)
- "Tesla cars are also a risk to society when I self-driving move - it’s machine l…" (ytc_UgxQkMpak…)
- "@Sonikkuben eh, things can be bad (not evil) on principle rather than applicati…" (ytr_UgzXE9ptg…)
- "When it comes with super suck & slobber action, it will sell like hot cakes…" (ytc_Ugwc3KRaM…)
- "Look, I don’t even like say this, but does aroma de look up look like humans but…" (ytc_UgyLMAM_S…)
- "At some point, corporations and successful businesses… And probably more so with…" (ytc_UgwCH-NRx…)
- "@heyaisdabomb As a human, I don't have to identify exactly what thing I'm seeing…" (ytr_UgwjGpdwI…)
- "I learned art like ai. Could you give me examples of how you learned that is dif…" (ytc_UgxI_4gaC…)
Comment

> Just as I never trust a human's words/actions at face value, I will always trust an LLM's/AI's even less. Our "thinking" functions (I include the quotations for many humans too) are not compatible, not yet; and without *much* more scrutiny in the development of LLMs/AIs, they never will be. We (humans) will always be more empathetic to a cockroach when faced with a physical appearance, but we (humans) will remain subject to the "willful" lies of an LLM (especially those of us who grew up with text messaging and chat rooms). Dunning-Krueger me please, I accept the criticism, but I firmly believe I'm right about this.

Source: youtube · Video: AI Moral Status · Posted: 2025-11-04T15:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | virtue |
| Policy | liability |
| Emotion | mixed |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response

```json
[
  {"id":"ytc_Ugx-Kf5755N0uwnaIV14AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_Ugz9gxgqk-E--MSKsed4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugy1A5kTeJ5lhKwrJBN4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"liability","emotion":"mixed"},
  {"id":"ytc_UgzIFpN0DV6IhZSIp514AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgxOjTqL_5ide_hcvKZ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgxohRpiahNje5xsWQR4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_UgzFVg5Wcv0xV4rfjeR4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgxpOnzILEL89JtbWst4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"fear"},
  {"id":"ytc_Ugx9ocFM6sP_EtdeMGl4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzCa-366cCHrF-0muF4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"outrage"}
]
```
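The raw response above is a JSON array of per-comment records, each carrying the four coded dimensions shown in the Coding Result table. A minimal sketch of how such output could be parsed and looked up by comment ID is below; the function name, the field-validation behavior, and the inline two-record sample are assumptions for illustration, not the pipeline's actual code.

```python
import json

# Hypothetical sample: two records in the same shape as the raw LLM response above.
raw = """
[
  {"id":"ytc_Ugx-Kf5755N0uwnaIV14AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_Ugy1A5kTeJ5lhKwrJBN4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"liability","emotion":"mixed"}
]
"""

# The four coded dimensions seen in the Coding Result table.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def parse_coding(raw_text):
    """Parse raw LLM output and index coded records by comment ID.

    Raises ValueError when a record lacks a dimension, so malformed
    model output fails loudly instead of silently dropping codes.
    """
    coded = {}
    for rec in json.loads(raw_text):
        missing = [d for d in DIMENSIONS if d not in rec]
        if missing:
            raise ValueError(f"record {rec.get('id')} missing {missing}")
        coded[rec["id"]] = {d: rec[d] for d in DIMENSIONS}
    return coded

coded = parse_coding(raw)
print(coded["ytc_Ugy1A5kTeJ5lhKwrJBN4AaABAg"]["policy"])  # liability
```

Indexing by ID makes the "look up by comment ID" view a dictionary access rather than a scan of the array.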