Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Biggest problem with AI is that it isn't trained on facts. It's trained on the word-order of *some* facts, but it can't always tell the difference between fiction and fact. The problem with coding AI bots is that it isn't trained on well-written code. It's trained on tutorial code, bad code, *and* some well-written code. Furthermore, with coding, it never takes the time to compare what it's doing to the initial design of the software. When I'm coding something up, I stop to think about 'how will this be weaponized against me' and 'does this meet the design of the software or am I just doing premature optimization crap because it feels good'. Between the 'fact' problem and the 'tutorial code' problem, this whole concept is NOT ready for prime-time. It's a toy. It could be argued that it's a *really* neat toy, but that's it. A toy.
youtube · AI Moral Status · 2025-10-30T19:5…
Coding Result
Dimension        Value
Responsibility   developer
Reasoning        deontological
Policy           liability
Emotion          mixed
Coded at         2026-04-26T23:09:12.988011
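The five dimensions in the table draw on a closed vocabulary. Below is a minimal sketch of the record type in Python, with value sets inferred only from the raw response shown further down; both the sets and the CodedComment name are assumptions, not the project's authoritative codebook.

from dataclasses import dataclass

# Allowed values inferred from the raw response on this page
# (assumption: the real codebook may define categories not seen here).
RESPONSIBILITY = {"developer", "company", "ai_itself", "distributed", "none"}
REASONING = {"deontological", "consequentialist", "mixed"}
POLICY = {"regulate", "liability", "none"}
EMOTION = {"fear", "outrage", "indifference", "mixed"}

@dataclass(frozen=True)
class CodedComment:
    id: str              # platform comment id, e.g. "ytc_..."
    responsibility: str  # who is held responsible
    reasoning: str       # moral-reasoning style
    policy: str          # policy response advocated, if any
    emotion: str         # dominant emotional tone

    def __post_init__(self) -> None:
        # Reject any value outside the observed vocabulary.
        for name, allowed in (("responsibility", RESPONSIBILITY),
                              ("reasoning", REASONING),
                              ("policy", POLICY),
                              ("emotion", EMOTION)):
            if getattr(self, name) not in allowed:
                raise ValueError(f"{name}={getattr(self, name)!r} not in codebook")

The coding result above would round-trip as CodedComment(id="ytc_Ugz590g8tnUELebYGlN4AaABAg", responsibility="developer", reasoning="deontological", policy="liability", emotion="mixed"), matching the last record in the batch below.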
Raw LLM Response
[ {"id":"ytc_UgxdWB2GvyUuqIVlCi54AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"}, {"id":"ytc_UgybtjBUk39J3illv054AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"}, {"id":"ytc_Ugy6W6lYH1D8Uj9Bwxl4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"none","emotion":"mixed"}, {"id":"ytc_UgzLmQDK4VS0RkkLAUd4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"outrage"}, {"id":"ytc_Ugw83iGH3FmGlHOpS314AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"mixed"}, {"id":"ytc_Ugw5gyINpG8jmJV9s6V4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"}, {"id":"ytc_UgzelWm4EbPVk114lMd4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"}, {"id":"ytc_Ugyk7e-1BrjucVChMBR4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"mixed"}, {"id":"ytc_UgxBdApmyz7dTqviZ154AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"fear"}, {"id":"ytc_Ugz590g8tnUELebYGlN4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"mixed"} ]