Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I used to think that artificial intelligence would end up enslaving us, but now I'm not sure. Frankly, much software doesn't work the way its supposed to. It's buggy, it crashes, it has errors. With different programmers programming buggy software into different robots, one would end up with many machines that make lots of mistakes and disagree with each other. I believe the robots will argue with themselves endlessly; not be organized enough to wipe us out.
youtube 2015-07-30T11:1…
Coding Result
Dimension        Value
Responsibility   ai_itself
Reasoning        consequentialist
Policy           none
Emotion          fear
Coded at         2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_UghIslpyA7guyXgCoAEC", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgjEdbJIUvtbBHgCoAEC", "responsibility": "distributed", "reasoning": "deontological", "policy": "regulate", "emotion": "approval"},
  {"id": "ytc_UghuKujNU5ka2ngCoAEC", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgiBiNeFv1hM1XgCoAEC", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_Uggf-Ur-bMSAKHgCoAEC", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgjCWiOOqeq5WXgCoAEC", "responsibility": "user", "reasoning": "consequentialist", "policy": "ban", "emotion": "outrage"},
  {"id": "ytc_UgjEO-KtZ8_n73gCoAEC", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugir7WJY-kdIN3gCoAEC", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "liability", "emotion": "approval"},
  {"id": "ytc_UgjvXdt72CGjS3gCoAEC", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "regulate", "emotion": "approval"},
  {"id": "ytc_UgiuggHy39wRkXgCoAEC", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "ban", "emotion": "fear"}
]
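The per-comment coding result shown above corresponds to the batch entry whose `id` matches the comment (here `ytc_UgiBiNeFv1hM1XgCoAEC`). A minimal Python sketch of that lookup, assuming the raw response is available as a JSON string; variable names are illustrative, not from the tool itself, and only two entries are reproduced for brevity:

```python
import json

# Assumption: raw_response holds the batch JSON shown above
# (truncated here to two of the ten entries).
raw_response = '''
[
  {"id": "ytc_UghIslpyA7guyXgCoAEC", "responsibility": "none",
   "reasoning": "mixed", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgiBiNeFv1hM1XgCoAEC", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "none", "emotion": "fear"}
]
'''

# Index the batch by comment id so each comment's coded
# dimensions can be looked up directly.
coded = {entry["id"]: entry for entry in json.loads(raw_response)}

# The comment displayed above resolves to this entry.
result = coded["ytc_UgiBiNeFv1hM1XgCoAEC"]
print(result["responsibility"], result["emotion"])
```

The id-keyed dictionary makes the lookup independent of the order in which the model returned the batch entries.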