Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples

- `rdc_gs5ufqy`: "> You would have to be incredibly naive to think that every military power in…"
- `ytc_Ugynx-2Oe…`: "Seems like you just don't like AI's answers. It's more likely that your opinion …"
- `ytr_Ugy76vSOZ…`: "@ ai meat riding is crazy 💀😭 ai art won’t even exist without human art in the fi…"
- `ytc_UgwtiX9ol…`: "It’s way dumber since GPT5 because they are making it stupider because you live…"
- `ytc_UgxZbjtN2…`: "we have to re define social contracts altogether from what's money, to jobs, to …"
- `ytc_UgzTibtld…`: "Ooooooh Ai "artist"sss Maybe pick up the magical and forbidden pencil??? OOOoooo…"
- `ytr_UgwyxhmTA…`: "@johnbrown4682 also you cannot diffentiate between whether if it was a human or …"
- `ytr_Ugwwn7KHM…`: "Silly user. Most ai users are just doing it for that, and if not they are either…"
Comment

> So ChatGPT for failed a test that I did on purpose with it to see how smart it thinks it is and I’m not talking like one or two questions like I fucking failed. I am a certified tire technician through the TIA standard, which is a tire industry of America standard, and literally would just type in the exact word that the test gave me and went off the answer that chat GBT4 produced and failed and this was fairly recently so just saying

youtube · AI Governance · 2025-03-02T00:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | liability |
| Emotion | outrage |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_UgyZHvXVr7FgeirxiK94AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgyXzmx9H8xJhi3lN_p4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_Ugxh4wqgm_GfuPfRfah4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"approval"},
{"id":"ytc_UgzEgzDz389iFb2Gxjp4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"approval"},
{"id":"ytc_Ugy1x6Ib_cUFwvzPydZ4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugx1Cpvf7WPSB4o4Vct4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
{"id":"ytc_Ugyh0ubBMb99QchDCZB4AaABAg","responsibility":"company","reasoning":"unclear","policy":"industry_self","emotion":"indifference"},
{"id":"ytc_UgyGMUsNpNlbzjh_RDp4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"approval"},
{"id":"ytc_Ugy-ZQbjoEP3hzh2xSN4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgwD8IXmLZGhvtvk8zp4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"fear"}
]
```
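A batch response like the one above is only usable downstream if every row carries a known label on each of the four dimensions. The sketch below shows one way to parse and validate such a response; the allowed values are inferred from the sample output on this page (an assumption — the real coding schema may permit additional labels), and `validate_batch` is a hypothetical helper name.

```python
import json

# Allowed labels per dimension, inferred from the sample batch above.
# ASSUMPTION: the actual coding schema may define more values.
SCHEMA = {
    "responsibility": {"ai_itself", "user", "developer", "company", "distributed", "unclear"},
    "reasoning": {"consequentialist", "deontological", "unclear"},
    "policy": {"liability", "regulate", "industry_self", "none", "unclear"},
    "emotion": {"fear", "approval", "outrage", "indifference", "resignation", "unclear"},
}

def validate_batch(raw: str) -> list[dict]:
    """Parse a raw LLM batch response and reject rows with unknown labels."""
    rows = json.loads(raw)
    for row in rows:
        for dim, allowed in SCHEMA.items():
            if row.get(dim) not in allowed:
                raise ValueError(f"{row.get('id')}: bad {dim}={row.get(dim)!r}")
    return rows

raw = ('[{"id":"ytc_Ugx1Cpvf7WPSB4o4Vct4AaABAg","responsibility":"ai_itself",'
       '"reasoning":"consequentialist","policy":"liability","emotion":"outrage"}]')
rows = validate_batch(raw)
print(rows[0]["emotion"])  # → outrage
```

Rejecting the whole batch on a single bad label is deliberate: a hallucinated label usually means the model drifted from the prompt's codebook, so the safer default is to re-run that batch rather than silently drop rows.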