Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples — click to inspect
- “copilot or any of those paid tools are great for quick tasks if you’re feeling l…” (ytc_UgzLyKxQS…)
- “I’ve never heard a compelling reason as to why self driving cars should be a thi…” (ytc_UgwBvnVLS…)
- “I'll tell you where humans will be when AI takes over...obese blobs floating abo…” (ytc_UgzVWNPJq…)
- “It is obvious that automatic motion detector in the uber car did not work and ki…” (ytc_UgyGwiEiO…)
- “How is OpenAi “one of the most powerful corporations” ? No profit , barely reven…” (ytc_UgzpKunon…)
- “I'm sure there will be some impact, but it seems like the homeowner class is als…” (rdc_gkqa2mi)
- “Maternal death rate is at niveau with Russian federation and all Balkan states h…” (rdc_dcxerw5)
- “This isn't black and white. Your comment requires more nuance. The AI was progr…” (ytr_UgyCU4-Pa…)
Comment
The issue at heart is accountability and liability for when AI doesn't deliver expected returns.
If companies implement AI for as many jobs as possible, then in theory (and pushed to its limits), we could just have the CEO oversee the AI VPs that check on AI Directors, that manage AI managers, and so on (or just one AI that oversees everything, if you prefer). However, this creates the issue about having the CEO be the one responsible -- the one be liable for everything. Someone has to be accountable when the AI doesn't deliver as promised. But if you keep the VP of Operations, then that's a buffer. The VP of Ops is interested in having a buffer too, so they create or keep the deputy director of Ops. And the same thing for every other executive area. Boards will also agree that this is necessary as the cost of replacing VPs, deputy directors, and maybe down to managers, by holding them accountable, is higher than the cost of replacing a supervisor or analyst who oversees the AI.
So the question is, are you in a position that supervises and responds for the results of an AI solution? If you are, and depending on how complex the solution is, then that will determine how likely you are to remain in that position.
My two cents.
Platform: youtube · Topic: AI Jobs · Posted: 2026-02-24T20:2… · ♥ 2
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | consequentialist |
| Policy | liability |
| Emotion | indifference |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
{"id":"ytc_UgwBS7UdMtu0yICkqNJ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"indifference"},
{"id":"ytc_UgwevxEc64EA9CXhd1Z4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_UgyyiqhWhsSfCMAJtNp4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
{"id":"ytc_Ugw-yo_rkJq9euG3jAR4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_Ugylpb8auxiwfYGoYH94AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_Ugwt-0ZzjBoXqq3N2BN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxLRScyDbmWmXkseAx4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"outrage"},
{"id":"ytc_UgyOlBcXwkQb0rd7nwh4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgyTHoM1Twk1x1qqxfF4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgwETB4fe_wqGfMrY114AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"}
]
```
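The raw response above maps each comment ID to four categorical fields. A minimal Python sketch of how such a payload might be parsed, validated, and indexed by comment ID. The allowed category sets are assumptions inferred from the values visible in this one sample; the actual codebook may define additional categories.

```python
import json

# Assumed category sets, inferred from the sample response above;
# the real codebook may allow more values per dimension.
ALLOWED = {
    "responsibility": {"company", "none", "ai_itself", "distributed"},
    "reasoning": {"consequentialist", "deontological", "mixed", "unclear"},
    "policy": {"liability", "regulate", "none"},
    "emotion": {"indifference", "outrage", "approval", "fear"},
}

def validate_records(raw: str) -> dict:
    """Parse a raw LLM response and index valid records by comment ID.

    Records that are missing a dimension or use an out-of-codebook
    value are silently dropped.
    """
    by_id = {}
    for rec in json.loads(raw):
        if all(rec.get(dim) in vals for dim, vals in ALLOWED.items()):
            by_id[rec["id"]] = rec
    return by_id

raw = ('[{"id":"ytc_UgwBS7UdMtu0yICkqNJ4AaABAg",'
       '"responsibility":"company","reasoning":"consequentialist",'
       '"policy":"liability","emotion":"indifference"}]')
coded = validate_records(raw)
print(coded["ytc_UgwBS7UdMtu0yICkqNJ4AaABAg"]["policy"])  # prints "liability"
```

Indexing by ID is what makes the "inspect any coded comment" lookup cheap: one dictionary access per comment rather than a scan of the whole response.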