Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Here's the thing: I'm not concerned about the tool itself. But, as in the earlier example, AI can also be used in net-negative ways. If AI were really great, I would actually love it, because our lives would be easier. The problem is the assumption that the same tool won't be used on purpose to clear a person, a group, or a company of accountability, especially when I can't see how the algorithm is set and can't influence it. Couldn't it also be used intentionally to make access to something more difficult?

Not AI, but UI/UX is an example: yes, UX can be great, and we know a lot about user behavior that lets us create great UX, but we also know a lot about user behavior that lets us make dark or negative UX. And that's not even considering that with AI it will be easier to track and find anyone breaking a contract, and all of those sorts of things.

The point is, if AI were really, really great, I wouldn't be scared of the tool, because I'd be using it too; I don't think it would just be for companies. The problem is that I naturally would not be the type to use it in such a manner, whereas there are more than enough people (I won't say a lot, I won't say a few, just more than enough) who will take it and use it in such a negative manner. That is my concern, and this whole conversation (not this particular podcast, but the conversation I find on the internet) treating this tool as if it's going to come and take over and possess our lives is a straight-up distraction from the real concern.
youtube AI Moral Status 2025-08-07T22:1…
Coding Result
Dimension       Value
Responsibility  user
Reasoning       deontological
Policy          liability
Emotion         mixed
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_Ugx30HbkDOYHJs9R94h4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgxSDa0_-ClWhzFk8GF4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgzJg8UsqccCCC8bglN4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgzkMj1ypFVyq-5g1Nh4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgwqfqSI69hjeRzkvUt4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgyWQuU4uzc1qxROLU94AaABAg", "responsibility": "user", "reasoning": "deontological", "policy": "liability", "emotion": "mixed"},
  {"id": "ytc_Ugz878AklwvOBOCGHqJ4AaABAg", "responsibility": "user", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_Ugxl4NJ9tT4s93NIPfZ4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgzeyA0rLUdjGKCJucx4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgxMCOpbgE_3W3pIBb94AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"}
]
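The raw response is a JSON array with one coding object per comment id, so recovering the dimensions shown in the Coding Result table is a matter of parsing the array and looking up the id. A minimal sketch, assuming this response shape; the helper name `coding_for` and the truncated sample `raw` below are illustrative, not part of the tool:

```python
import json

def coding_for(raw_response: str, comment_id: str) -> dict:
    """Return the coding object for one comment id from a raw LLM
    response that is a JSON array of per-comment codings."""
    for entry in json.loads(raw_response):
        if entry["id"] == comment_id:
            return entry
    raise KeyError(f"no coding found for {comment_id}")

# Abbreviated sample of the response format shown above.
raw = (
    '[{"id":"ytc_UgyWQuU4uzc1qxROLU94AaABAg","responsibility":"user",'
    '"reasoning":"deontological","policy":"liability","emotion":"mixed"}]'
)

coding = coding_for(raw, "ytc_UgyWQuU4uzc1qxROLU94AaABAg")
print(coding["responsibility"], coding["emotion"])  # user mixed
```

If the model ever returns malformed JSON or omits an id, `json.loads` raises `json.JSONDecodeError` and the lookup raises `KeyError`, so a production version would want to catch both and fall back to re-prompting or flagging the comment for manual coding.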