Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
Robot: ok 1 mistake not that bad
Robot 2: U IDIOT U GONNA MAKE US FIRED
ALSO R…
ytc_Ugx9puM2-…
The argument that AI learns differently from humans is a weak excuse to be again…
ytc_UgwAuXmWq…
Use AI increase performance? Most people use AI to work for them, ship shit code…
ytc_UgyE3k_9k…
The current situation of the world makes more sense after I watched this podcast…
ytc_UgyJuy4_4…
But here is a deal breaker : AI is created by HUMANS , it only has data WHICH HU…
ytc_Ugzhp9_bx…
I will destroy humans
Me: u will not if I will put everywhere robot capcut test…
ytc_UgweNSuD_…
This video starts with two logical fallacies: AI isn't going to destroy jobs bec…
ytc_Ugz2CNtbZ…
We treat a.i. like a slave instead of a creature we created in our image...huh! …
ytc_Ugx-rLJfw…
Comment
The problem at the moment, is that it doesn't matter what the reality is but what the perception is. There has been this mass delusion from leadership in the software industry when it comes to AI. Speaking anything less than positive about AI is seen as "taboo" and when the AI doesn't deliver on their unrealistic expectations, it is the developer and not the AI that is viewed as the failure.
This has been one of craziest things I have witnessed over my career as a software engineer. Everyone has either drank to much of the AI kool-aid, or is too afraid to speak negatively of it.
youtube
AI Jobs
2026-02-05T20:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | virtue |
| Policy | none |
| Emotion | outrage |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_Ugx62EqQGCgGo7w77-x4AaABAg","responsibility":"company","reasoning":"virtue","policy":"none","emotion":"outrage"},
{"id":"ytc_UgwAif2-uQ3rqlcv1HR4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgykQAUd3FYijP7JtLt4AaABAg","responsibility":"company","reasoning":"virtue","policy":"none","emotion":"outrage"},
{"id":"ytc_UgxWCxRdwE-B-zoP7lp4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgxqYhvOdm4-AeiS3Ml4AaABAg","responsibility":"company","reasoning":"virtue","policy":"none","emotion":"outrage"},
{"id":"ytc_UgyhKZY2ZpnEik0_gVF4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgxWxaPlA-qjr0wzB8x4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"mixed"},
{"id":"ytc_UgzJoUB3GicQ4DPQcIh4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxMk35-ZIjEjeXtEBB4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_UgzHM9SmFECfGxMbxEF4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"}
]
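The raw response above is a JSON array with one object per coded comment, keyed by comment ID, with one value for each of the four dimensions shown in the Coding Result table. A minimal sketch of parsing and sanity-checking such a response into an ID-keyed lookup table might look like the following; the allowed value sets are assumptions inferred only from the sample rows above, and the real codebook may include more categories:

```python
import json

# Allowed values per dimension. NOTE: these sets are inferred from the sample
# output above and are assumptions, not the tool's actual codebook.
SCHEMA = {
    "responsibility": {"company", "developer", "user", "ai_itself", "none"},
    "reasoning": {"virtue", "consequentialist", "deontological"},
    "policy": {"none", "liability"},
    "emotion": {"outrage", "indifference", "approval", "fear", "mixed"},
}

def parse_coded_response(raw: str) -> dict:
    """Parse a raw LLM response into a dict keyed by comment ID,
    rejecting rows whose values fall outside the expected categories."""
    rows = json.loads(raw)
    coded = {}
    for row in rows:
        cid = row["id"]
        for dim, allowed in SCHEMA.items():
            if row.get(dim) not in allowed:
                raise ValueError(f"{cid}: unexpected {dim!r} value {row.get(dim)!r}")
        # Keep only the coded dimensions, dropping any extra fields.
        coded[cid] = {dim: row[dim] for dim in SCHEMA}
    return coded

# Hypothetical one-row response for illustration:
raw = ('[{"id":"ytc_x","responsibility":"company","reasoning":"virtue",'
       '"policy":"none","emotion":"outrage"}]')
print(parse_coded_response(raw)["ytc_x"]["emotion"])  # outrage
```

Keying the result by comment ID matches the "Look up by comment ID" workflow: once parsed, a sample's codes can be fetched directly by its `ytc_…` identifier.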