Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
There are ways to address that with training though. Biasing which code to pull from or which output is better. Are they or are they not? IDK. I only ever ask for pretty simple stuff when I don't want to pause to suss out the logic. If it's larger, it's worth bringing enough attention to bear to write it myself. But regardless, I don't think this is an unsolvable problem. Perhaps AI will have some suggestions :)
YouTube · AI Jobs · 2025-02-26T11:2…
Coding Result
Dimension       Value
Responsibility  none
Reasoning       consequentialist
Policy          none
Emotion         approval
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_UgxFHmvz1kJtG8RWeKx4AaABAg", "responsibility": "company",     "reasoning": "mixed",            "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgxA2xOUkC6XWLLq6Jx4AaABAg", "responsibility": "user",        "reasoning": "virtue",           "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgyCoMGAUqnI_6rnpVF4AaABAg", "responsibility": "user",        "reasoning": "virtue",           "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgxTsIC0vqH8usyQH7x4AaABAg", "responsibility": "none",        "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgzkzqPh_Z6Nxnd-oY54AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgzEKCBhWBRO8pEyI5V4AaABAg", "responsibility": "company",     "reasoning": "deontological",    "policy": "none", "emotion": "resignation"},
  {"id": "ytc_Ugxtbfn7nS__GokPi4p4AaABAg", "responsibility": "ai_itself",   "reasoning": "mixed",            "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgwWfJHROiMpEdj_cSt4AaABAg", "responsibility": "none",        "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgzjBELzTIGp8l_W9Ql4AaABAg", "responsibility": "ai_itself",   "reasoning": "mixed",            "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgyefzUUBJQrsGnuwHx4AaABAg", "responsibility": "none",        "reasoning": "consequentialist", "policy": "none", "emotion": "approval"}
]
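The coded result shown above corresponds to one entry of this raw JSON array, matched by comment id. A minimal sketch of how such a batch response could be parsed and indexed by id (the field names mirror the response above; the parsing itself is illustrative, not the pipeline's actual code):

```python
import json

# Raw LLM response: a JSON array of per-comment codings along four
# dimensions (responsibility, reasoning, policy, emotion). Only one
# entry is reproduced here for brevity.
raw = '''[
  {"id": "ytc_UgxTsIC0vqH8usyQH7x4AaABAg",
   "responsibility": "none", "reasoning": "consequentialist",
   "policy": "none", "emotion": "approval"}
]'''

# Index the codings by comment id for O(1) lookup.
codings = {row["id"]: row for row in json.loads(raw)}

# Pull the coding for the comment displayed above.
coding = codings["ytc_UgxTsIC0vqH8usyQH7x4AaABAg"]
print(coding["reasoning"])  # consequentialist
```

Indexing by id makes it easy to join the model's codings back onto the original comments, and a `json.JSONDecodeError` from `json.loads` would flag responses where the model failed to return valid JSON.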