Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I just shutdown my editor like 5s ago. I was done for the day...but I thought wait let Claude do it for me...thats the new norm right? So I figured I would let Anthropics model have some fun... Now I'm even more exhausted from wrestling with Claude because it can't convert a directory of JSONL files to parquet. 10 minutes later and it still can't get it done. I could have done the thing myself multiple times over. What are y'all doing, that I am not. It is just a function, a simple function at that. How is everyone letting these things write all their code when I can't get it to write one function correctly. It sucks at typing. It forgets try blocks, no logging, (no quality) if its not some baseline function, or antiquated library API, forget it. I want to get in to this, I am just not convinced. Sorry.

Second building something from vibe coding is like rolling the dice, non-deterministic, the one who gets a lucky roll is called "skilled" or "cracked" , when really they are neither.

And the fact that software developers are choosing "what" gets implemented is the scariest thing I have ever heard! What CEO is letting some grunt choose what direction their product is going, based on what?? Forget the forecasting, fire the business analysts, burn the WBS, we are just letting the Devs do it all. I don't think so.
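The task the commenter describes — converting a directory of JSONL files to parquet, with the try blocks and logging they found missing — can be sketched as follows. This is a minimal illustration, not the commenter's code or the model's output; all paths and function names are hypothetical, and `pandas` (plus a parquet engine such as `pyarrow`) is assumed to be installed for the write step.

```python
import json
import logging
from pathlib import Path

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("jsonl2parquet")


def load_jsonl(path: Path) -> list[dict]:
    """Read one JSONL file, skipping (and logging) malformed lines."""
    records = []
    with path.open(encoding="utf-8") as fh:
        for lineno, line in enumerate(fh, start=1):
            line = line.strip()
            if not line:
                continue
            try:
                records.append(json.loads(line))
            except json.JSONDecodeError as exc:
                log.warning("%s:%d skipped malformed line: %s", path, lineno, exc)
    return records


def convert_dir(src: Path, dst: Path) -> None:
    """Convert every *.jsonl file under src to a parquet file under dst."""
    # Imported here so the JSONL helpers work without the heavy dependency;
    # to_parquet additionally requires pyarrow or fastparquet.
    import pandas as pd

    dst.mkdir(parents=True, exist_ok=True)
    for path in sorted(src.glob("*.jsonl")):
        try:
            records = load_jsonl(path)
            if not records:
                log.warning("%s contained no records, skipping", path)
                continue
            out = dst / (path.stem + ".parquet")
            pd.DataFrame.from_records(records).to_parquet(out, index=False)
            log.info("wrote %s (%d rows)", out, len(records))
        except Exception:
            log.exception("failed to convert %s", path)
```

One file failing to parse or convert is logged and does not abort the rest of the directory, which is the kind of defensive structure the comment says the model omitted.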
YouTube · Viral AI Reaction · 2026-03-05T20:5… · ♥ 22
Coding Result
Dimension       Value
Responsibility  ai_itself
Reasoning       consequentialist
Policy          none
Emotion         resignation
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_UgyzdMFHbjXYBvvt2yB4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_Ugy2s4W_dbkrGUTMpkJ4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_Ugzcx0z7agoN4LX1uC14AaABAg", "responsibility": "user", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgwmobvveJevu_FZpgR4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_Ugx-bLY_gZlCBGqxI394AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_Ugw7cGrmnhuOU2qVNjB4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgwQa43nS1K4F-wsfgh4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_Ugx7BzkaiyqyYFGtBWN4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "ban", "emotion": "outrage"},
  {"id": "ytc_UgxFRO4N66VwLBtro-N4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"},
  {"id": "ytc_Ugy5cx990gkpohcoEUJ4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"}
]
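The raw response is a JSON array with one object per comment id. A minimal sketch of how such a response could be parsed and sanity-checked before further analysis is shown below; the allowed value sets are inferred from the values that appear in this particular response, not from a documented codebook, so a real pipeline would substitute its own definitions.

```python
import json
from collections import Counter

# Verbatim copy of the raw LLM response shown above.
RAW = """
[
  {"id": "ytc_UgyzdMFHbjXYBvvt2yB4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_Ugy2s4W_dbkrGUTMpkJ4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_Ugzcx0z7agoN4LX1uC14AaABAg", "responsibility": "user", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgwmobvveJevu_FZpgR4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_Ugx-bLY_gZlCBGqxI394AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_Ugw7cGrmnhuOU2qVNjB4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgwQa43nS1K4F-wsfgh4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_Ugx7BzkaiyqyYFGtBWN4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "ban", "emotion": "outrage"},
  {"id": "ytc_UgxFRO4N66VwLBtro-N4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"},
  {"id": "ytc_Ugy5cx990gkpohcoEUJ4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"}
]
"""

# Allowed values inferred from this one response; a real codebook may define more.
ALLOWED = {
    "responsibility": {"ai_itself", "company", "user", "none"},
    "reasoning": {"consequentialist", "deontological"},
    "policy": {"none", "ban", "liability"},
    "emotion": {"resignation", "outrage", "fear", "approval"},
}

codes = json.loads(RAW)
by_id = {}
for row in codes:
    for dim, allowed in ALLOWED.items():
        # Reject any coding outside the expected value set for that dimension.
        assert row[dim] in allowed, f"{row['id']}: unexpected {dim}={row[dim]!r}"
    by_id[row["id"]] = row

# Per-dimension tallies across the batch.
tallies = {dim: Counter(row[dim] for row in codes) for dim in ALLOWED}
```

Validating each field against a closed value set catches the most common failure mode of this kind of batch coding: the model inventing a label that the downstream analysis does not expect.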