Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
4:55 This is getting close to The Big Question(s), in my opinion. Today we have LLMs. LLMs are not intelligent or self aware. So two things that come to mind for me are: 1. Will today's AI actually be the technology that leads to actual "Intelligent" machines (in other words, "will what got us here, get us there?") 2. How long will that take? I think about electronics, for example. Many of today's electronics were first discovered/theorized/implemented in the 1800s to early 1900s, but weren't properly produced and deployed into the world until the 1960s to 1970s, I believe mostly because of the solid-state transistor. That was the quantum leap that led to a whole new way of doing things. It was a proper technological revolution. So the transistor of the AI world... we may not hit it for 100 more years. Or ever. Or we may hit it in 5 years. I wonder if actual AI experts have more insight on this topic.
youtube AI Moral Status 2025-10-30T21:5… ♥ 1
Coding Result
Dimension        Value
Responsibility   none
Reasoning        mixed
Policy           none
Emotion          mixed
Coded at         2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id":"ytc_UgzHCH_7D3Io1A9ZfUt4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgydK4YU0WvkkXDhLZR4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgyLW75ItQyohqOU8-x4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"resignation"},
  {"id":"ytc_Ugyi3pryPPZ16W5-jrN4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgyAcSPetC-PdFpwvhx4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgyCbY8TYZcio_FCw7B4AaABAg","responsibility":"none","reasoning":"contractualist","policy":"regulate","emotion":"approval"},
  {"id":"ytc_UgxqV2VekvkpMAdPBXd4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgwR5aqfElxaSpKXGOl4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgyOXNQrSMo9rDaxXcJ4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugz35HnxfBiL56aUr4J4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"liability","emotion":"outrage"}
]
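The raw response is a JSON array of per-comment codes; to map a comment's id back to its coded dimensions (as the Coding Result panel does), the array can be indexed by id. A minimal sketch, assuming the response parses as valid JSON — the `raw` string below is an abbreviated two-entry sample mirroring the output above, and the `codes_by_id` helper name is hypothetical:

```python
import json

# Abbreviated sample of the raw LLM output: a JSON array of
# per-comment codes (full ten-entry array omitted for brevity).
raw = '''[
 {"id":"ytc_UgzHCH_7D3Io1A9ZfUt4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
 {"id":"ytc_UgyLW75ItQyohqOU8-x4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"resignation"}
]'''

def codes_by_id(raw_json: str) -> dict:
    """Index each coded comment by its id for quick per-dimension lookup."""
    return {row["id"]: row for row in json.loads(raw_json)}

codes = codes_by_id(raw)
print(codes["ytc_UgyLW75ItQyohqOU8-x4AaABAg"]["emotion"])  # resignation
```

Indexing by id rather than scanning the list makes it cheap to join the coded dimensions back onto the original comment records.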