Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
You should do a part 2. Explain machine learning/neural networks. Then propose a situation where we try to create a human-like AI by re-training a network. Of course, the best one wins. Right now, machine learning applications perform specific jobs very well, if you train them. They figure out how to do a task after you say "this was better, now make it more like this", repeat 50,000 times... We may train an AI to act like a human, but will it be acting or genuinely doing? We have to train it to feel emotions. And we can train it to not feel emotions. Of course, if a network can train itself, then we won't be asking these questions anymore. But many thousands of samples/examples are necessary for an application to work well, AND it needs to know whether or not it's training correctly. What if we train a network to gather information? Another to identify problems? And one that judges/scores/rates samples by comparison? And then we 'hook em all up' to a problem solving network? The machine learning field is moving very quickly, so I'm not sure if these are even roadblocks or not. Check out the youtube channel "Two Minute Papers" if you wanna see the pace I'm talking about.
Source: youtube · Video: AI Moral Status · Posted: 2017-02-23T17:0…
Coding Result
Dimension        Value
Responsibility   unclear
Reasoning        unclear
Policy           unclear
Emotion          approval
Coded at         2026-04-27T06:26:44.938723
Raw LLM Response
[ {"id":"ytc_Ugh41n29YAfLjHgCoAEC","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"}, {"id":"ytc_UgjUX54NZ50rJXgCoAEC","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"approval"}, {"id":"ytc_UggWRAyuUpm09HgCoAEC","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"outrage"}, {"id":"ytc_UgjtGPR86FWExHgCoAEC","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"}, {"id":"ytc_UggzAC61_MiM6HgCoAEC","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"approval"}, {"id":"ytc_UgjDE9fgSDJoXXgCoAEC","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"}, {"id":"ytc_Ugggs9VmRm0HhHgCoAEC","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"approval"}, {"id":"ytc_UgjrgEL0jAqgLXgCoAEC","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"}, {"id":"ytc_Uggj7Nhq65Rap3gCoAEC","responsibility":"distributed","reasoning":"consequentialist","policy":"liability","emotion":"fear"}, {"id":"ytc_UgjAssWvMk6kRHgCoAEC","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"approval"} ]