Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
What about model collapse? Slop can't create high strictured output. This idea of reinforcement from AI generated content can only work if data can be mathematically proven. Most real world data can't. Even simple designs require world knowledge. The AIpocalypse for "knowledge work" is vastly overrated. Models will struggle to scale past very basic process. Tasks associated with design can be automated... design itself... unlikely.
youtube AI Jobs 2026-02-26T04:4…
Coding Result
Dimension       Value
Responsibility  none
Reasoning       consequentialist
Policy          none
Emotion         indifference
Coded at        2026-04-27T06:24:53.388235
Raw LLM Response
[ {"id":"ytc_UgwGW59DevAiA5FdqK94AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"}, {"id":"ytc_UgxhYLq_-WaikbyUo-Z4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}, {"id":"ytc_Ugz5FVa8saOozGF0XIh4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"}, {"id":"ytc_UgwtnmMR3IebZ2jsAgB4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"}, {"id":"ytc_UgwUCb-dkizB6hXmwsx4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"}, {"id":"ytc_UgyenmsuFICQzXNzCWB4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"}, {"id":"ytc_UgwtwcbttFZ0FF7u8fJ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"}, {"id":"ytc_Ugy1AycOmFPmRg9SPJh4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"liability","emotion":"fear"}, {"id":"ytc_UgxIJPDbCxtvGxnIIfp4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"}, {"id":"ytc_UgyOvc_ttRXBNZoe8pl4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"outrage"} ]