Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Let me talk about visualization power and skills. I have had these since 2013; those who have worked with 2D and 3D models can relate. I used all those skills to train new budding engineers, interns and freshers, because a machine, system, or device has never been a good co-worker or senior. So I deliberately used my vision-based skills to conceptualise and propose new designs, which need at least 1 to 1.5 years to become fully functional when integrated with physical systems, per standard timelines. 1:25:53 Such visionary contributions have never been made by AI, because AI would first need to be trained on what vision, the future, and a concept even are. By the way, my contribution was a concept back then, in 2020; it became a real physical model in 2024. Did AI help me? No. I did it while surrounded by scientists, and it was for DRDO ARDE. The amounts already invested are sufficient; no further investment is needed, because it is good and functional. AI cannot improve it; it will only deviate, and even after paying money the outcome will not be satisfactory. So no one will choose it. Even for someone willing to accept a margin of error, it is not reliable, because no one likes that feeling of uneasiness, or compromising accuracy while paying or investing additional money.
youtube 2026-04-26T09:4…
Coding Result
Responsibility: none
Reasoning: virtue
Policy: none
Emotion: approval
Coded at: 2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id": "ytc_UgzVkF_NrLhkrEUVVVJ4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgzLowtyS4K1gwcVkiR4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgxJFDUpw1XGigpJ7mN4AaABAg", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "outrage"},
  {"id": "ytc_UgxRXlHNABr7L1lv52p4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgyeLGPbuS6GBEUl8NR4AaABAg", "responsibility": "none", "reasoning": "virtue", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgzMBadBsMgyzTfMzXh4AaABAg", "responsibility": "user", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgxlfB3eUxRSUIynTop4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgxFI5kcBLGmjQUDv5N4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_Ugxfzkj5jcopo-Urf754AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgzY4WLqaKcu4xBIIfJ4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "none", "emotion": "outrage"}
]
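When inspecting raw responses like the one above, it helps to parse the JSON and flag any record whose dimension values fall outside the expected categories. Below is a minimal sketch; the allowed-value sets are assumptions inferred only from the example records shown here, not from the project's actual codebook, and the function name `parse_raw_response` is hypothetical.

```python
import json

# Allowed values per coding dimension (ASSUMED from the example
# records above; the real codebook may define other categories).
SCHEMA = {
    "responsibility": {"none", "unclear", "company", "user", "ai_itself", "developer"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed", "unclear"},
    "policy": {"none", "unclear", "liability", "regulate"},
    "emotion": {"approval", "indifference", "mixed", "outrage", "fear", "resignation"},
}

def parse_raw_response(raw: str) -> list[dict]:
    """Parse a raw LLM response and keep only records whose
    dimension values match the expected categories, printing a
    warning for any record that does not."""
    records = json.loads(raw)
    valid = []
    for rec in records:
        bad_dims = [
            dim for dim, allowed in SCHEMA.items()
            if rec.get(dim) not in allowed
        ]
        if bad_dims:
            print(f"{rec.get('id')}: unexpected value(s) in {bad_dims}")
        else:
            valid.append(rec)
    return valid

# Example: one record copied from the raw response above.
raw = ('[{"id":"ytc_UgyeLGPbuS6GBEUl8NR4AaABAg",'
       '"responsibility":"none","reasoning":"virtue",'
       '"policy":"none","emotion":"approval"}]')
print(len(parse_raw_response(raw)))  # 1 valid record
```

A check like this makes it easy to spot responses where the model drifted from the coding scheme (e.g. invented a new emotion label) before the values are aggregated.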