Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I was a professional developer for around 10 years and now lead IT services accounts, providing solutions to enterprise customer problems. I recently decided to use AI to co-write (using Cursor in vibe-coding mode) a few personal projects I have going on, and I have two observations:

1. It's really amazing what LLMs are capable of. I was able to get workable apps (web-based, using GCP for backend resources) up and going in a few hours. Considering how rusty and outdated my coding skills are (I coded in C/C++ and learned Java on my own time, but never got paid for writing Java code), I estimate it would've taken weeks of work to research, code, and troubleshoot, and honestly, I probably would've gotten frustrated and bored with all the reading it would take to upskill. So LLMs are true power multipliers - not only for rusty, old programmers like me; I can imagine what I would've been able to do with AI when I was in my prime. To ignore it is to disadvantage yourself!

2. If I didn't have a strong background in technology and coding, I would have had a much more difficult, less productive engagement with the LLM. There were many times the LLM made mistakes, wrote bad code, was unable to fix problems, and, worst of all, got completely confused and started taking me down rabbit holes. I had to use my understanding of coding to recognize when it was going awry and help it debug, redirecting it to better solution paths, etc. So while LLMs are doing amazing things, at this stage (and probably for at least the next few years) they must be operated by knowledgeable developers!
Source: youtube · AI Jobs · 2025-03-24T16:4…
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  none
Reasoning       mixed
Policy          none
Emotion         approval
Coded at        2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id": "ytc_UgzAD1K1qwH5iegqTsl4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugxg073mNimewjxdwdd4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgxH5EW_9muwjfkoxp54AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgzLlX84yCPlRLu7wiZ4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgwD6yPq77WTVk2gj2B4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgzcRxLPnzwptH2eOpB4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugx0MI4eQWNbtamU-Th4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgwDNbqjYfEubzUZ4WZ4AaABAg", "responsibility": "user", "reasoning": "deontological", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgxWADpALdls0xR1u7F4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugzb6LF89xtOR2R6-7J4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "mixed"}
]
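A minimal sketch of how a raw response like the one above can be parsed back into per-comment coding rows. It assumes only the field names visible in the JSON (`id`, `responsibility`, `reasoning`, `policy`, `emotion`); the `codes_by_id` helper is hypothetical, not part of the coding tool, and the `raw` string is a two-record excerpt of the response shown.

```python
import json

# Hypothetical excerpt of the raw LLM response above: a JSON array of
# per-comment codes, one object per YouTube comment id.
raw = """[
  {"id": "ytc_UgzAD1K1qwH5iegqTsl4AaABAg", "responsibility": "none",
   "reasoning": "mixed", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgwDNbqjYfEubzUZ4WZ4AaABAg", "responsibility": "user",
   "reasoning": "deontological", "policy": "none", "emotion": "mixed"}
]"""

# The four coding dimensions shown in the result table above.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def codes_by_id(raw_json: str) -> dict:
    """Index the LLM's coded records by comment id, keeping only the
    four coding dimensions (drops any extra fields the model emits)."""
    records = json.loads(raw_json)
    return {r["id"]: {d: r[d] for d in DIMENSIONS} for r in records}

codes = codes_by_id(raw)
print(codes["ytc_UgwDNbqjYfEubzUZ4WZ4AaABAg"])
# {'responsibility': 'user', 'reasoning': 'deontological', 'policy': 'none', 'emotion': 'mixed'}
```

Indexing by `id` makes it cheap to join each coded record back to its source comment, which is what the per-comment result table above effectively displays.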