Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I have tested Copilot and Google's AI, Gemini or something like that, and Copilot won on every single aspect. And yes, when given long and complex tasks it would make many mistakes. One way to get around that is by breaking complex tasks into many smaller ones. You really need to talk to it as if it were a child. For example, if I wanted it to count 1 to 10 but skip 4 and 8, I'd have to declare my main idea, "count 1 to 10", and then specify the extra constraint, "ignore 4 and 8". So I would say it in one line, "count one to ten, but skip 4 and 8", or I would start with the main task, "count 1 to 10", send it, and once it finished replying I would add "now repeat and skip 4 and 8", and so on.

But if I had just thrown in an old piece of code that needed a lot of extra work and asked it to fix it all up and add this and add that while at it, that won't work, and that ba5tard won't tell you if there was a mistake or an error while executing your requests. It would just say: "Certainly, here is the BS you gave me with some extra BS I added." 🤣🤣🤣

It can be a pain in the a55 to carefully and precisely tell it what to do and in what exact order, but once you get a grip on how to do it, you will know how to get more out of it by working some stuff yourself and giving the rest to Copilot.
youtube · AI Jobs · 2024-06-29T15:5…
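The counting example the commenter describes maps to only a few lines of code. As a point of reference, here is a minimal sketch of what the two-step prompt ("count 1 to 10", then "now repeat and skip 4 and 8") should converge on; this is illustrative, not any model's actual output:

    # Step 1 of the decomposed prompt establishes the main task:
    # count from 1 to 10.
    for n in range(1, 11):
        # Step 2 layers on the refinement: skip 4 and 8.
        if n in (4, 8):
            continue
        print(n)

The point of the decomposition is that each prompt changes exactly one thing, so a wrong turn is visible immediately rather than buried in a large rewritten block.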
Coding Result
Dimension        Value
Responsibility   user
Reasoning        consequentialist
Policy           none
Emotion          approval
Coded at         2026-04-26T23:09:12.988011
Raw LLM Response
[ {"id":"ytc_UgxleJu-6sLkD3Le33p4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"}, {"id":"ytc_Ugza-coFWH-orgiJhsp4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"}, {"id":"ytc_UgydOMp2XYjxU-D-wPt4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"outrage"}, {"id":"ytc_UgxnJ5QJ0-U40p3W0AR4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"}, {"id":"ytc_UgwaBosly0lUwmb3vXd4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"}, {"id":"ytc_Ugzfzmazo6pLdVQ9qpF4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"}, {"id":"ytc_UgxEKcnlCI_ggQysJCZ4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"approval"}, {"id":"ytc_UgzqjIbs_3_LBJaYGAJ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"}, {"id":"ytc_UgxdTdpd2451Dxf3VCB4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"}, {"id":"ytc_UgysmkfczKxj56uYWQZ4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"approval"} ]