Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
So refreshing to hear a person from this background have a very Nordic state of mind concerning these matters. In other words, from a very human and, more importantly, humane perspective. Scientists being driven by "Is it possible?" rather than "Should we?", and AI creators trying to create AI in their own image or vision, is why it should never be allowed, and narratives like "If we don't do it, someone worse will" should be taken seriously from people's perspective as well as regulatively. We can't have profit-driven empires, nor delusions of grandeur. Regarding the question of humans also making errors and AI less so, we have to bear in mind, which I think Karen is supportive of, that we know and understand that humans are prone to error. It is in our mindset, letting us be alert on the road. Millions of people giving up control would mean that we accept that machines are likely fallible but still leave them to babysit us, relinquishing control. That is like knowing that an AI could babysit your children, but deciding to trust that it is capable. Choosing a person to babysit your children has a much better systemic decision-making process, even if it can fail and you trust the wrong person.
youtube 2026-04-20T11:1…
Coding Result
Dimension       Value
Responsibility  developer
Reasoning       deontological
Policy          ban
Emotion         approval
Coded at        2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id": "ytc_UgylasSC1l3Y63ZvaMN4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgzHQs25bWdTssHh0x14AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "regulate", "emotion": "indifference"},
  {"id": "ytc_Ugz5HlFqrMnTgiXND3R4AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgyrCl7CVk0o67p4V794AaABAg", "responsibility": "ai_itself", "reasoning": "unclear", "policy": "ban", "emotion": "outrage"},
  {"id": "ytc_UgwI_HWrUp1E5i2cucV4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "ban", "emotion": "approval"},
  {"id": "ytc_UgzXfZEsLmmNqS2sVqB4AaABAg", "responsibility": "user", "reasoning": "unclear", "policy": "unclear", "emotion": "outrage"},
  {"id": "ytc_UgyDTU-pzxKIw10j90l4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_Ugzz2Un26O3ayOURp694AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "industry_self", "emotion": "approval"},
  {"id": "ytc_Ugz0zdOZNhJ1DYQ_B0l4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "regulate", "emotion": "resignation"},
  {"id": "ytc_UgxwBh4wapMS2YFzNEt4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "regulate", "emotion": "approval"}
]
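The coding table above is derived from a single record in the batch JSON the model returned. A minimal sketch of recovering that record, assuming the response is a JSON array of objects keyed by comment `id` as shown (the array here is abbreviated to the one record that matches the coded comment):

```python
import json

# Abbreviated raw LLM response: one record from the array shown above.
raw = """[
  {"id": "ytc_UgwI_HWrUp1E5i2cucV4AaABAg",
   "responsibility": "developer",
   "reasoning": "deontological",
   "policy": "ban",
   "emotion": "approval"}
]"""

# Parse the array and index the records by comment id for lookup.
records = json.loads(raw)
by_id = {r["id"]: r for r in records}

# Pull the coding for the comment inspected on this page.
coded = by_id["ytc_UgwI_HWrUp1E5i2cucV4AaABAg"]
print(coded["responsibility"], coded["policy"])  # developer ban
```

This illustrates why the per-comment view can show exact dimension values: each array element carries the four coding dimensions plus the stable comment id.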