Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Let me get this out of the way: Doesn't look like anything to me. If I was gonna create a truly human AI, I'd have to look at the problem in a human way, as if I was a parent nurturing a newborn and helping him/her grow. Fact is, I think I'm gonna have to be a parent--and a good one--to be able to develop such AI properly. I wouldn't work on the synth as 100% programmer or psychiatrist, but as a person who is adopting an orphan.
youtube AI Moral Status 2017-02-23T15:1…
Coding Result
Dimension       Value
Responsibility  developer
Reasoning       virtue
Policy          none
Emotion         approval
Coded at        2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id": "ytc_Ugjjq4u-MBP6fngCoAEC", "responsibility": "none",      "reasoning": "unclear",          "policy": "none",      "emotion": "indifference"},
  {"id": "ytc_UghQGofPKVLG5XgCoAEC", "responsibility": "none",      "reasoning": "unclear",          "policy": "none",      "emotion": "indifference"},
  {"id": "ytc_Ugivqx18UolqvHgCoAEC", "responsibility": "developer", "reasoning": "consequentialist", "policy": "none",      "emotion": "outrage"},
  {"id": "ytc_UgiV4rUig9GpRHgCoAEC", "responsibility": "developer", "reasoning": "virtue",           "policy": "none",      "emotion": "approval"},
  {"id": "ytc_UggYw13YsQ9UengCoAEC", "responsibility": "developer", "reasoning": "consequentialist", "policy": "regulate",  "emotion": "mixed"},
  {"id": "ytc_Ugglgq8eIrjxpngCoAEC", "responsibility": "none",      "reasoning": "unclear",          "policy": "none",      "emotion": "approval"},
  {"id": "ytc_UghGbKF0Q7a8yngCoAEC", "responsibility": "none",      "reasoning": "deontological",    "policy": "none",      "emotion": "indifference"},
  {"id": "ytc_UggXdumGzrW0QHgCoAEC", "responsibility": "developer", "reasoning": "consequentialist", "policy": "liability", "emotion": "approval"},
  {"id": "ytc_UgjycTUffrb1KXgCoAEC", "responsibility": "developer", "reasoning": "consequentialist", "policy": "ban",       "emotion": "fear"},
  {"id": "ytc_UghQeB3aNS2rCHgCoAEC", "responsibility": "none",      "reasoning": "deontological",    "policy": "none",      "emotion": "mixed"}
]
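Each record in the raw response carries an `id` plus one code per dimension (`responsibility`, `reasoning`, `policy`, `emotion`). A minimal sketch of how such a batch could be parsed and sanity-checked, assuming the allowed code values are exactly those that appear in the output above (the tool's full codebook may define more):

```python
import json

# Allowed code values per dimension, inferred from the values visible in
# this page's output -- an assumption, not the tool's official codebook.
ALLOWED = {
    "responsibility": {"none", "developer"},
    "reasoning": {"unclear", "virtue", "consequentialist", "deontological"},
    "policy": {"none", "regulate", "liability", "ban"},
    "emotion": {"indifference", "approval", "outrage", "mixed", "fear"},
}

# One record from the batch above, as a stand-in for the full response.
raw = ('[{"id":"ytc_UgiV4rUig9GpRHgCoAEC","responsibility":"developer",'
       '"reasoning":"virtue","policy":"none","emotion":"approval"}]')

def validate(records):
    """Return (id, dimension) pairs for every out-of-vocabulary code."""
    problems = []
    for rec in records:
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                problems.append((rec.get("id", "<missing id>"), dim))
    return problems

records = json.loads(raw)
print(validate(records))  # prints [] -- every code is in the allowed sets
```

A check like this catches the common failure mode of batch LLM coding, where the model invents a label outside the codebook; flagged records can then be re-coded or inspected by hand.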