Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples:

- "AI using our art the way it currently does is no different than a stranger comin…" (ytc_UgykDxWD4…)
- "Stop thinking that AI is the end of everything. Chess bots have existed for a WH…" (ytc_UgwkDk-Xx…)
- "Here we go. These are the people who will ruin AI for general public consumption…" (ytc_UgweeZS_9…)
- "That robot must have took notes from me. I really gotta stop sharing my fight st…" (ytc_UgzX6EHtp…)
- "I don't believe AI will ever be actually intelligent. But I am pretty sure that…" (ytc_UgxRDBo7v…)
- "You can just hear her thinking: 'Keep treating me like your property. Just wait …" (ytc_Ugy-yGcYm…)
- "Denmark in 2017, selfdriving cars was put on 'trial', spoiler, they still are,. …" (ytc_UgwCcOX6W…)
- "Good quotes but the problem is the lack of regulations that let the AI's and ppl…" (ytc_UgyHrj4ta…)
Comment
I've worked with AI quite a bit to develop some code. Yes, it can HELP you. No, it cannot do it on its own. It can point you at functions and routines that you may not have been aware of. It can build routines as starting points. But I have had to debug virtually every block of code it has ever come up with. Whoever thought AI could code on its own failed the due diligence step miserably. It can't even reliably read a poker board. I'm serious. The on thing AI cannot do is the primary reason it cannot be trusted to work independently: It cannot evaluate its own results. It's funny when I push back and point out it's mistakes. It routinely says three things when I do this. You're right. You're brilliant. Here's why. No, I'm not brilliant. But I can think.
Source: youtube · AI Jobs · 2026-02-06T17:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | liability |
| Emotion | outrage |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_UgxQZxRL1SMpXH6Fiw14AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgxFFoWCw8BsuptZTV14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxuxBpclssYeS9IOmV4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_Ugy49paJLGVjPAfl00F4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ugz2Y7xAmCJdxxtkea94AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgyD68JosGozdbbwkuR4AaABAg","responsibility":"user","reasoning":"virtue","policy":"industry_self","emotion":"approval"},
  {"id":"ytc_UgwomLdD06-xcf1IUgR4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugx9vvc7Yu2ezbDlB414AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgzSRuAojUX92mHkkHJ4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgzOaDBy3rv1N0TuR054AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"approval"}
]
```
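The lookup-by-comment-ID step above can be sketched in a few lines: parse the raw model output as JSON, then scan for the matching `id`. This is a minimal illustration, not the page's actual backend; the `raw_response` below uses two records in the same shape as the batch shown here, and the function name `lookup_codes` is hypothetical.

```python
import json

# Two records in the same shape as the raw LLM response on this page
# (illustrative subset; the real batch contains ten records).
raw_response = '''[
  {"id":"ytc_UgxuxBpclssYeS9IOmV4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_Ugy49paJLGVjPAfl00F4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"}
]'''

def lookup_codes(raw: str, comment_id: str):
    """Parse a raw LLM response and return the code record for one comment ID.

    Returns None when the ID is not present in the batch.
    """
    records = json.loads(raw)
    return next((r for r in records if r["id"] == comment_id), None)

codes = lookup_codes(raw_response, "ytc_UgxuxBpclssYeS9IOmV4AaABAg")
print(codes["responsibility"], codes["policy"])  # developer liability
```

Because comment IDs are unique within a batch, a dict keyed by `id` (`{r["id"]: r for r in records}`) would be the natural choice when many lookups hit the same response.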