Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
One thing that you did not touch upon here is the increasing token requirement by new models. In my experience, Opus 4.6 running in an agent teams format consumes tokens at an insane rate. At what point will this token cost reach a point where it is no longer “cheaper” to leverage AI for specific tasks?
YouTube · AI Jobs · 2026-02-24T18:5… · ♥ 6
Coding Result
Dimension      | Value
Responsibility | company
Reasoning      | consequentialist
Policy         | unclear
Emotion        | indifference
Coded at       | 2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id": "ytc_UgxM-tGCf2vToDWaSRl4AaABAg", "responsibility": "unclear",     "reasoning": "unclear",          "policy": "unclear", "emotion": "unclear"},
  {"id": "ytc_UgyzoYXDu1UvAHWJ8dN4AaABAg", "responsibility": "unclear",     "reasoning": "unclear",          "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgwovtkyAlG2f1F_7354AaABAg", "responsibility": "company",     "reasoning": "consequentialist", "policy": "none",    "emotion": "resignation"},
  {"id": "ytc_UgzyAvQvYPCox55yDTx4AaABAg", "responsibility": "unclear",     "reasoning": "unclear",          "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgyGAjy1OxvqhWcZV7p4AaABAg", "responsibility": "unclear",     "reasoning": "unclear",          "policy": "unclear", "emotion": "approval"},
  {"id": "ytc_UgyzPKbWHOA70q313Kl4AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "none",    "emotion": "resignation"},
  {"id": "ytc_UgxKo8irKuvoCrOmgpB4AaABAg", "responsibility": "company",     "reasoning": "consequentialist", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgyL6A77P_mcP1-HAEZ4AaABAg", "responsibility": "developer",   "reasoning": "virtue",           "policy": "unclear", "emotion": "outrage"},
  {"id": "ytc_UgzpAKtcm8SzpG28HrN4AaABAg", "responsibility": "ai_itself",   "reasoning": "consequentialist", "policy": "none",    "emotion": "fear"},
  {"id": "ytc_UgySZFz0st0lXnRkYXV4AaABAg", "responsibility": "unclear",     "reasoning": "unclear",          "policy": "unclear", "emotion": "mixed"}
]
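Since the raw response is a JSON array keyed by comment id, looking up the codes for a given comment is a matter of parsing the array and matching on `id`. A minimal sketch, assuming the array shape shown above (the helper name `codes_for_comment` and the two-record sample are illustrative, not part of the tool):

```python
import json

# Abbreviated sample of a raw LLM response: a JSON array of per-comment
# codes, using two of the records shown above.
raw_response = """[
  {"id": "ytc_UgwovtkyAlG2f1F_7354AaABAg", "responsibility": "company",
   "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgyL6A77P_mcP1-HAEZ4AaABAg", "responsibility": "developer",
   "reasoning": "virtue", "policy": "unclear", "emotion": "outrage"}
]"""

def codes_for_comment(raw, comment_id):
    """Return the coded dimensions for one comment id, or None if absent."""
    for record in json.loads(raw):
        if record.get("id") == comment_id:
            # Drop the id itself; keep only the coded dimensions.
            return {k: v for k, v in record.items() if k != "id"}
    return None

codes = codes_for_comment(raw_response, "ytc_UgwovtkyAlG2f1F_7354AaABAg")
print(codes)
# → {'responsibility': 'company', 'reasoning': 'consequentialist',
#    'policy': 'none', 'emotion': 'resignation'}
```

Matching on id rather than array position keeps the lookup robust when the model returns the records in a different order than the comments were sent.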