Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
The thing is, AI has to become intelligent, but also needs to figure out how to match our mobility, and compatibility, and passion. If it is intelligent, it also follows that it will be able to make its own decisions because a thing that doesn't make up its own mind about a thing isn't intelligent. So will it continue to do what we told it? Likely not. Will it become evil and not care about anything other than itself? Would that be "intelligent"? I dont think so...
youtube · AI Moral Status · 2025-10-30T23:0…
Coding Result
Dimension      | Value
Responsibility | unclear
Reasoning      | unclear
Policy         | unclear
Emotion        | unclear
Coded at       | 2026-04-26T23:09:12.988011
Raw LLM Response
[{"id":"ytc_UgxgrK6C2Uao6798G7R4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"}, {"id":"ytc_UgzrYwQ_ZYtGkegqHtV4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"}, {"id":"ytc_Ugw-_boNT2UHH-KKDep4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"}, {"id":"ytc_UgxtpzWAN0_e8eE9p-F4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"}, {"id":"ytc_Ugx4EJsMOUikWacNTml4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"outrage"}, {"id":"ytc_UgxF4bXUctfpg4nSK9h4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"}, {"id":"ytc_UgyN9kO7i9XbC_VyJI14AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}, {"id":"ytc_UgzaOS5tyiTeC6YSXLd4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"}, {"id":"ytc_Ugz9FH0P2EV96FON3Yx4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"mixed"}, {"id":"ytc_UgyIkkQde0j9HOJ2gU94AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"mixed"})