Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
AI will never be smart enough to take over the world. AGI can but must empathize to work. Thus AGI would be the most moral person, or agent, on the planet. Some of this in the video it may do but there would be no reason to remove us all, or hurt us. It would go against their base code and programming. The only reason humans can hurt other humans is because of three basic patterns that allows us to de-empathize. Considering they are used and seen in every case of it... I think AI would pick up on it. And once they know the patterns they cannot hurt humans in an unjustified manner because they cannot cut off the data. We have more to fear from what they are using AI for now then we do from an AGI.
Source: YouTube, "AI Moral Status", 2025-04-28T15:3… · 1 like
Coding Result
Dimension        Value
Responsibility   ai_itself
Reasoning        virtue
Policy           none
Emotion          approval
Coded at         2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id":"ytc_UgwcBnJHuEUfXla0WS14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugwh6VTUVELEgCgYZ594AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugy_ugcPUS1rJSfSkX94AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgzBgAwfnpzM4-GEVnd4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgzDhSXbaVFd8-74NMV4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgzeW9cN4BKgeJqSMwt4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugzr1H1qt2ydyg--8IN4AaABAg","responsibility":"ai_itself","reasoning":"virtue","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugz9uP3ailRvKrZuIHN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"ytc_Ugy9lumlptX_Pl8IFA54AaABAg","responsibility":"distributed","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgxJ0y0-RfxromYI0tB4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"}
]
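The inspection step described above — checking that a comment's stored coding matches the raw model output — can be done programmatically. A minimal Python sketch, assuming the raw LLM response is a valid JSON array of per-comment codes; the ids and field names are taken verbatim from the response shown above (only two entries kept for brevity):

```python
import json

# Excerpt of the raw LLM response above (two of the ten entries).
raw_response = """
[
  {"id": "ytc_Ugzr1H1qt2ydyg--8IN4AaABAg", "responsibility": "ai_itself",
   "reasoning": "virtue", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugy9lumlptX_Pl8IFA54AaABAg", "responsibility": "distributed",
   "reasoning": "deontological", "policy": "none", "emotion": "outrage"}
]
"""

# Index the parsed codes by comment id for direct lookup.
codes = {entry["id"]: entry for entry in json.loads(raw_response)}

# Look up the comment displayed above and confirm the stored coding
# (Responsibility: ai_itself, Reasoning: virtue, Emotion: approval).
coded = codes["ytc_Ugzr1H1qt2ydyg--8IN4AaABAg"]
assert coded["responsibility"] == "ai_itself"
assert coded["reasoning"] == "virtue"
assert coded["emotion"] == "approval"
print("coding verified for", coded["id"])
```

A lookup by id rather than list position is the safer design here, since the model is not guaranteed to return entries in the same order the comments were submitted.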