Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples:

- "godlike ai is basically someone who did everything so perfectly that nothing in …" (ytc_Ugzl9sNcZ…)
- "> And the idea of a “robot tax” has garnered considerable support, including fro…" (ytc_UgySG_2sa…)
- "AI isn’t evil; it’s indifferen and indifference plus power plus bad incentives c…" (ytc_UgyLnHTzs…)
- "Since ChatGPT has already collected large volumes of information, I wonder if it…" (ytc_UgxbDwXIi…)
- "Neither. We'll enter into a fragmented age of despots and corporatism where goo…" (rdc_dulen2m)
- "Read James Maynard Keynes. We could and can be at a point for UBI in the U.S. in…" (ytc_Ugx8NxoyM…)
- "AI isn't drama, it is more than drama, it sucks, people lose they're jobs, and e…" (ytc_UgyWgpJ66…)
- "By 2027, there won’t be any AI security specialists left, and they’ll be out of …" (ytc_UgwLko385…)
Comment
Real humans see your shitty code. Real humans downvote your shitty code. Shitty code is ranked as bad code by AI and used to understand what not to do. AI improves significantly faster due to presence of negative examples.
youtube · AI Jobs · 2026-02-13T02:1… · ♥ 2
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
  {"id": "ytr_Ugwy0v0fInzb8c9ZKxd4AaABAg.AOJLEdVJ85VAT94hRH33x8", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytr_UgyxXeXbTk2fA1lWWPV4AaABAg.AOJKahn1CW6AOJWSLTZ9mp", "responsibility": "user", "reasoning": "virtue", "policy": "industry_self", "emotion": "approval"},
  {"id": "ytr_UgyxXeXbTk2fA1lWWPV4AaABAg.AOJKahn1CW6APGl01QKONu", "responsibility": "user", "reasoning": "deontological", "policy": "liability", "emotion": "mixed"},
  {"id": "ytr_UgxJtSMxKJvYwcwozz14AaABAg.AOJH0yKSP-KAQv3GYXYVN-", "responsibility": "user", "reasoning": "deontological", "policy": "none", "emotion": "fear"},
  {"id": "ytr_Ugzly8bmQj7x0PhHGKx4AaABAg.AOJF94Ay4yRAOJKpsCdZqn", "responsibility": "user", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytr_Ugwv4_F-xNCEBI2iFJN4AaABAg.AUCWq1huj_eAUCjVN-OMet", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytr_UgzcKW90RFUinSLqb6J4AaABAg.AUBp785bEBbAUC0OteMS-3", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytr_UgxuzpdCPfUTkqrqeJ54AaABAg.AR4N0p5nkXPARNUkU7rTo0", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytr_UgzfyxpWE3zboiIJha54AaABAg.AQWmbs2YWPnAQ_E9n3f3bz", "responsibility": "company", "reasoning": "deontological", "policy": "regulate", "emotion": "mixed"},
  {"id": "ytr_UgxeXNExXlseL9mEXsZ4AaABAg.APfhCrtWbiTAQD4zX8FPQv", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"}
]
```
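A raw response like the one above can be parsed, indexed by comment ID, and checked against the coding scheme. The sketch below is a minimal illustration, not the tool's actual implementation; the allowed values per dimension are inferred from the records visible on this page (the real codebook may define more categories), and the `ytr_example` ID is a hypothetical placeholder.

```python
import json

# Allowed values per dimension, inferred from the coding results shown
# on this page; the actual codebook may include additional categories.
CODEBOOK = {
    "responsibility": {"ai_itself", "user", "company", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"none", "industry_self", "liability", "regulate", "unclear"},
    "emotion": {"indifference", "approval", "mixed", "fear"},
}

def index_by_id(raw_response: str) -> dict:
    """Parse a raw LLM response (a JSON array) and index records by comment ID."""
    records = json.loads(raw_response)
    return {rec["id"]: rec for rec in records}

def validate(record: dict) -> list:
    """Return the dimensions whose coded value falls outside the codebook."""
    return [dim for dim, allowed in CODEBOOK.items()
            if record.get(dim) not in allowed]

# Hypothetical one-record response for illustration.
raw = ('[{"id":"ytr_example","responsibility":"ai_itself",'
       '"reasoning":"consequentialist","policy":"none","emotion":"indifference"}]')
coded = index_by_id(raw)
print(validate(coded["ytr_example"]))  # prints [] — every value is in the codebook
```

Indexing by ID makes the "look up by comment ID" workflow a plain dictionary access, and the validation step catches any record where the model drifted outside the allowed label set.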