Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
But aren't human thought processes also about predicting the best possible answer? I see your points, but the actual problem with LLMs is the missing reinforcement learning cycle. In general and domain specific. All the other pattern which you introduced would do a human in the same way for problem solving.
youtube AI Jobs 2026-02-26T08:4… ♥ 1
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  none
Reasoning       mixed
Policy          unclear
Emotion         indifference
Coded at        2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id": "ytc_Ugw64E5T6ozVpi7J8YR4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_Ugwt7XdP1CAkDUiHQld4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_Ugx-CmmFfYk_s_x4U3Z4AaABAg", "responsibility": "company", "reasoning": "virtue", "policy": "unclear", "emotion": "outrage"},
  {"id": "ytc_UgzYZiO2ms1omdYdSUN4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_Ugzv5PK9Jwz-XZ60wAx4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "unclear", "emotion": "approval"},
  {"id": "ytc_UgyHqCNT1TtQZR17yn94AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_Ugx9NVHmMBi48JPr9Xx4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgwnHLyMh48vkLjdQ9Z4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgyP3LTupCFGiXEuLl14AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "unclear", "emotion": "outrage"},
  {"id": "ytc_UgzKF2hpjjFcxka56pF4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"}
]
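A raw response like the one above can be parsed and sanity-checked before the codes are stored. The sketch below is a minimal validator in Python; the allowed values per dimension are inferred only from the sample output shown here, and the real codebook may define additional categories.

```python
import json

# Allowed values per coding dimension. These sets are inferred from the
# sample response above (an assumption, not the authoritative codebook).
ALLOWED = {
    "responsibility": {"none", "company", "ai_itself"},
    "reasoning": {"mixed", "unclear", "virtue", "consequentialist", "deontological"},
    "policy": {"unclear", "none", "regulate", "liability"},
    "emotion": {"indifference", "outrage", "approval", "fear", "resignation"},
}

def validate(raw: str) -> list:
    """Parse a raw LLM response and check every coded comment.

    Raises ValueError if a record is missing a dimension or uses a
    value outside the allowed set; returns the parsed records otherwise.
    """
    records = json.loads(raw)
    for rec in records:
        for dim, allowed in ALLOWED.items():
            value = rec.get(dim)
            if value not in allowed:
                raise ValueError(f"{rec.get('id')}: bad {dim!r} value {value!r}")
    return records

# Example: validate a single-record response.
raw = ('[{"id":"ytc_Ugw64E5T6ozVpi7J8YR4AaABAg","responsibility":"none",'
       '"reasoning":"mixed","policy":"unclear","emotion":"indifference"}]')
records = validate(raw)
print(len(records))  # 1
```

Rejecting out-of-vocabulary values at parse time keeps a single malformed model response from silently polluting the coded dataset.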