Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- “The best (least awful?) effect of AI image generators, for me, has been that I h…” (ytr_Ugyh4GvW1…)
- “Both misalignment threats can be controlled already, but it wont happen. 1) Mak…” (ytc_UgyPSz4Ax…)
- “He claimed Google search doesn’t need to be regulated because Google search did …” (ytc_UgyOXbYRx…)
- “MSM tell long-form lies all the time. Changed the colour of Joe Rogan with "the …” (ytc_UgyHbfD4h…)
- “The ai agents for Amazon, banks, online retail etc are a time consuming nightmar…” (ytc_UgwBCEodc…)
- “So... just as a test...... can I get Ai to perform the most simple of basic jou…” (ytc_Ugzh-Ar4s…)
- “Considering that human drivers VASTLY outnumber self-driving vehicles, your “com…” (ytr_UgxbJMkdG…)
- “As a software developer, I can confirm that AI code generation is bad. Give it s…” (ytc_UgyyH--Yc…)
Comment (youtube · AI Jobs · 2026-02-09T13:4…)

> These AI models are not that capable. They do not have the ability to create things beyond the instructions that they are given. The majority of these bottles are purely language based they could give you a good start on a program, but that program is only gonna function at the most basic level. It takes real world experience in programming to create a program that covers so many dynamics. And because the AI models that were using are extremely limited, those programs are therefore limited.
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | deontological |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_UgwB_FujTtz1FJ_QKAd4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"approval"},
{"id":"ytc_UgzjCKBQnwNHxavIvAB4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"industry_self","emotion":"indifference"},
{"id":"ytc_UgzAulysMiaHjzJWvs14AaABAg","responsibility":"company","reasoning":"virtue","policy":"liability","emotion":"outrage"},
{"id":"ytc_Ugw-gjCTHWI9SFuSQP54AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgzO1IhC0SkmRH2nhJt4AaABAg","responsibility":"company","reasoning":"virtue","policy":"liability","emotion":"outrage"},
{"id":"ytc_Ugxg-98Fx6y2XP9-WiR4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"industry_self","emotion":"indifference"},
{"id":"ytc_UgwfojKoYnAZqIEy8TJ4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugx7cOBsZRQGMQJRmFR4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugyiekd1swAeKVlELYR4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"approval"},
{"id":"ytc_Ugx1rNAbtJGZDpLB93d4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"resignation"}
]
```
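Downstream tooling has to turn a raw response like this back into per-comment codes. A minimal sketch in Python of one way to parse and validate it; the allowed values below are inferred from the samples on this page (the real codebook may define more categories), and the comment IDs in the usage example are placeholders:

```python
import json

# Allowed values for each coding dimension, inferred from the samples
# shown on this page -- an assumption, not the official codebook.
ALLOWED = {
    "responsibility": {"ai_itself", "user", "company", "developer", "distributed", "unclear"},
    "reasoning": {"deontological", "consequentialist", "virtue", "unclear"},
    "policy": {"ban", "industry_self", "liability", "regulate", "none", "unclear"},
    "emotion": {"approval", "indifference", "outrage", "fear", "resignation", "mixed"},
}

def parse_codes(raw: str) -> dict:
    """Parse a raw LLM response (a JSON array of coded rows) into
    {comment_id: {dimension: value}}, rejecting out-of-vocabulary values."""
    out = {}
    for row in json.loads(raw):
        codes = {dim: row[dim] for dim in ALLOWED}
        for dim, value in codes.items():
            if value not in ALLOWED[dim]:
                raise ValueError(f"{row['id']}: unexpected {dim} value {value!r}")
        out[row["id"]] = codes
    return out

# Usage with a placeholder ID:
raw = '[{"id":"ytc_x","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"approval"}]'
codes = parse_codes(raw)
print(codes["ytc_x"]["policy"])  # prints: none
```

Keying the result by comment ID also makes the "look up by comment ID" path above a plain dictionary access.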