Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
TBH I don't understand this junior vs. senior division. For most jobs it is simply not possible to come up with a prompt good enough to specify to an AI what to do. Take the programmer role. Sure, an AI can win a competitive programming contest of some sort, but that's because the solutions to those challenges are small but very clever pieces of code produced from scratch. When you work at a tech company, your task is usually not to write a small, clever piece of code that is independent from everything else. Often you work in a huge code base, with tons of teams and interdependencies. The software works well, but under certain conditions performance crumbles. How on earth can you specify that to an AI? Most of the job IS to identify, qualify, and specify the problem and how to solve it. Once you have done that, the piece of code you have to write might be simple; in other cases a big piece of the architecture has to be recoded. Sometimes you need to access customer data to identify the problem. That data can be very sensitive, so you need to acquire tons of authorizations before accessing and analyzing it. Again, how can you do that with an AI? How can you even train an AI to do that, when it is so specific to your company? You would also have to convince your customers to let an AI manipulate their data, knowing how uncontrollably AI behaves!
Source: YouTube · Viral AI Reaction · 2026-01-06T14:0…
Coding Result
Dimension        Value
---------        -----
Responsibility   none
Reasoning        unclear
Policy           none
Emotion          indifference
Coded at         2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_Ugy23qcI_VKcSMNc_7x4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgybXAVtZV-AAW0mGcZ4AaABAg", "responsibility": "company", "reasoning": "contractualist", "policy": "industry_self", "emotion": "outrage"},
  {"id": "ytc_Ugwckec4IPE-vzUMnEV4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgzETI22O969ThXH51l4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_UgwJj1Aym_INMkKjJg14AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugyv9MbZmsPgBwU_IXx4AaABAg", "responsibility": "company", "reasoning": "contractualist", "policy": "regulate", "emotion": "approval"},
  {"id": "ytc_Ugzsfquw0nFjLuk4m-d4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgyOL4q-0zcifc1L9W14AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgzhVvhAOzQlAcfjtHh4AaABAg", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgyV7GxwTPMF8M6xG8J4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "liability", "emotion": "outrage"}
]
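A raw response like the one above has to be parsed and checked before its codes can be trusted. The following is a minimal sketch of that step, assuming the response is a valid JSON array of records keyed by comment id; the set of allowed values per dimension is inferred from the codes visible in this dump, not from an authoritative codebook.

```python
import json

# Allowed codes per dimension, inferred from the values seen in this
# response dump (assumption; the real codebook may include more).
ALLOWED = {
    "responsibility": {"none", "company", "ai_itself"},
    "reasoning": {"consequentialist", "contractualist", "deontological", "unclear"},
    "policy": {"none", "industry_self", "liability", "regulate"},
    "emotion": {"fear", "outrage", "indifference", "approval", "resignation"},
}

def parse_llm_response(raw: str) -> dict:
    """Parse a raw LLM coding response and index records by comment id,
    rejecting any record with an unknown dimension or code."""
    coded = {}
    for rec in json.loads(raw):
        rid = rec["id"]
        for dim, value in rec.items():
            if dim == "id":
                continue
            if dim not in ALLOWED or value not in ALLOWED[dim]:
                raise ValueError(f"{rid}: unexpected {dim}={value!r}")
        coded[rid] = {k: v for k, v in rec.items() if k != "id"}
    return coded

# Example with one record from the response above.
raw = ('[{"id":"ytc_Ugwckec4IPE-vzUMnEV4AaABAg","responsibility":"none",'
       '"reasoning":"unclear","policy":"none","emotion":"indifference"}]')
coded = parse_llm_response(raw)
```

Rejecting out-of-vocabulary codes here, rather than downstream, makes it obvious when the model has drifted from the coding scheme.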