Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- "@Ravenousyouth Efficiency doesn't mean less work. Of course it gets filled with …" — `ytr_Ugx0afQAw…`
- "I also pondered this question until I D/Led 'Sarah', a very basic Alpaca model w…" — `ytc_UgzN51bQX…`
- "After NSA leaks, it is proven that China is not even near the Western surveillan…" — `ytc_Ugx_yjXcU…`
- "I don't believe AI will kill art, same way that digital painting didn't \"kill\" p…" — `ytc_UgyUm76Z9…`
- "There are some jobs which require a license from the government in order to have…" — `ytc_UgxeZd3uJ…`
- "\"Hi Jose, we are sorry to say that you got the wrong answer but in any case, the…" — `ytr_UgzwzmpiM…`
- "If anyone would use AI for spreading misinformation Elon would b on top of that …" — `ytc_Ugwsqd3b9…`
- "I think that the American government and the computer AI experts are going to ma…" — `ytc_UgxtVwRbI…`
Comment
> I work in Big Tech IT and still wonder where all these layoffs everyone else talks about happens ? Not in IT it seems. The current economy in the world though is a reason things stand still on the job market.
> But AI in itself has just enhanced all companies and made life easier for Engineers.
> There is no difference reviewing a Pull Request from AI or from Humans. So the process was already there before AI to handle how you trust code, you simply do not trust anyone.
> That 10% is utterly crap and slip out in production like this video claims is just undeducated talk from someone who never worked in IT, ever. Because the amount of bad code from Humans is much much larger and it is up to the Pull Request process to make sure this is caught.
youtube · AI Responsibility · 2026-01-11T11:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | mixed |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_Ugx0QfVzQiIHMcOiDeR4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxRHFENTmWeBnV9wkt4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugx3EQNWPTV06sHHFB54AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugx5_fRbAQLlDPBY4dl4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgzW7AL3TA4SYkERt154AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
{"id":"ytc_UgwSoeNsUKanEpIubsB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugxk6CoTVo7TUwhk9wR4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgzDeDPM5Yq1HQ45jsZ4AaABAg","responsibility":"company","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgxB1SK4W9RGqiNjU3h4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_Ugyn6_K5GcWiMopaoNJ4AaABAg","responsibility":"company","reasoning":"virtue","policy":"none","emotion":"indifference"}
]
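A raw response like the one above should be checked before it reaches the results table. The sketch below is a minimal, hypothetical validator: the allowed values for each dimension are inferred only from the codes visible in this sample (the actual codebook may define more), and the `ytc_`/`ytr_` id prefixes are assumed from the sample IDs shown here.

```python
import json

# Allowed codes per dimension, inferred from this one sample response;
# the real codebook (not shown in this page) may include more values.
ALLOWED = {
    "responsibility": {"user", "none", "ai_itself", "company"},
    "reasoning": {"deontological", "consequentialist", "virtue", "mixed", "unclear"},
    "policy": {"none", "regulate", "ban", "liability"},
    "emotion": {"indifference", "mixed", "fear", "approval", "outrage"},
}

def validate_response(raw: str) -> list[dict]:
    """Parse a raw LLM response and check every coded row.

    Raises ValueError on a malformed row so a bad coding never
    slips into the results table silently.
    """
    rows = json.loads(raw)
    for row in rows:
        # Comment/reply ids in this tool appear to start with ytc_/ytr_.
        if not str(row.get("id", "")).startswith(("ytc_", "ytr_")):
            raise ValueError(f"unexpected comment id: {row.get('id')!r}")
        for dim, allowed in ALLOWED.items():
            if row.get(dim) not in allowed:
                raise ValueError(f"{row['id']}: bad {dim}={row.get(dim)!r}")
    return rows

# Usage: validate the first row of the response shown above.
raw = ('[{"id":"ytc_Ugx0QfVzQiIHMcOiDeR4AaABAg","responsibility":"user",'
       '"reasoning":"deontological","policy":"none","emotion":"indifference"}]')
rows = validate_response(raw)
print(len(rows))  # 1
```

Failing fast here, rather than coercing unknown codes to `none`, keeps the coding-result tables trustworthy: an out-of-vocabulary value usually means the model drifted from the prompt's schema.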