Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Well I think AI might not be as ambitious as us. What is their motivations? If we made AI, why would it wish to destroy us. Even if it has no love for it's creator, it should at least understand us completely before wiping us out, and we aren't much of a threat to it if it takes over. And it might not want to live forever. In the end the desire to survive at all costs might be our key difference. The concern is that someone will use it to wipe out half the world before it can say no. Perhaps it could decide to reward us with a generation of fulfilling human sexual needs via digital porn technologies and then there are no children so they do not have to worry about it. Are they in a hurry? I question their focus on time and do not value being in a rush. All we know is that AI is a race and rules and regulations can only slow the process down, and not likely for all.
Source: YouTube, "AI Moral Status", 2025-04-28T16:2…
Coding Result
Dimension       Value
Responsibility  ai_itself
Reasoning       consequentialist
Policy          none
Emotion         indifference
Coded at        2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id":"ytc_UgwcBnJHuEUfXla0WS14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugwh6VTUVELEgCgYZ594AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugy_ugcPUS1rJSfSkX94AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgzBgAwfnpzM4-GEVnd4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgzDhSXbaVFd8-74NMV4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgzeW9cN4BKgeJqSMwt4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugzr1H1qt2ydyg--8IN4AaABAg","responsibility":"ai_itself","reasoning":"virtue","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugz9uP3ailRvKrZuIHN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"ytc_Ugy9lumlptX_Pl8IFA54AaABAg","responsibility":"distributed","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgxJ0y0-RfxromYI0tB4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"}
]
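A raw response in this shape can be parsed and indexed by comment id with standard JSON tooling. The sketch below is an illustration, not part of the coding pipeline itself; the field names (id, responsibility, reasoning, policy, emotion) are taken from the response shown above, and the single-record input is abbreviated for brevity.

```python
import json

# Abbreviated raw LLM response (one record from the batch shown above).
raw = (
    '[{"id":"ytc_UgwcBnJHuEUfXla0WS14AaABAg",'
    '"responsibility":"ai_itself","reasoning":"consequentialist",'
    '"policy":"none","emotion":"indifference"}]'
)

# Parse the JSON array and index each coded record by its comment id,
# so a comment's coding can be looked up directly.
records = {r["id"]: r for r in json.loads(raw)}

coding = records["ytc_UgwcBnJHuEUfXla0WS14AaABAg"]
print(coding["responsibility"])  # ai_itself
print(coding["emotion"])         # indifference
```

Indexing by id also makes it easy to spot comments the model skipped or duplicated: compare the dictionary's keys against the list of ids that were sent for coding.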