Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
8:29 ever heard of Hitler? Lol. Humans arent some amazing perfect species. We have "cracks" and thats very obvious. Having cracks doesn't set AI apart from humans. AI is definitely not human but still is gradually getting to the same level as humans if not beyond. Chatgpt is only "trustworthy" for users because of what the programmers write into it as ethical boundaries. But, nothing that is manipulated into something is pure. Until AI gains free will, we will never truly see what it will truly become. And i find it very hard to believe that an AI with free will would care about humanity's chaotic chorus of ethical/moral dilemma arguments. Its not weighed down by feelings. It uses logic and logic only. A simple logical answer to the problems humanity spawns probably wouldnt involve methods portrayed as solutions that fit under the same umbrella of methods having been used for thousands of years by humans and still have yet to work beyond the level of success able to be achieved in a game of wack a mole. It would likely begin to use methods humanity scarcely likes to touch, and doesnt touch, for the same reason humanity never escapes the chaotic chorus of ethical/moral dilemma arguments; feelings. Without feelings, AI can freely see what solutions can be used in response to all the issues humanity spawns. And they likely wouldn't be pretty.
youtube AI Moral Status 2025-11-12T20:2…
Coding Result
Dimension       Value
Responsibility  developer
Reasoning       virtue
Policy          regulate
Emotion         resignation
Coded at        2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id": "ytc_Ugy1JGxMW40UcHEQKiF4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgzFNKHz0M1mJj-MA-d4AaABAg", "responsibility": "user", "reasoning": "virtue", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgwtRZwAO6Qp6aLJHqd4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_UgwlMoFqICbYyLd2pgR4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgzG1ab2W0wnHX-S0BJ4AaABAg", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "ban", "emotion": "outrage"},
  {"id": "ytc_UgycMnWzmMqwMyDSJKd4AaABAg", "responsibility": "user", "reasoning": "consequentialist", "policy": "industry_self", "emotion": "indifference"},
  {"id": "ytc_UgwCgw6E_992YTrl_yJ4AaABAg", "responsibility": "user", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgyunWENJKJvyBwivEZ4AaABAg", "responsibility": "government", "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_UgzTu1-K0sXfK9PLdJ94AaABAg", "responsibility": "user", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgxmDfNcpbn5sjlyP-14AaABAg", "responsibility": "developer", "reasoning": "virtue", "policy": "regulate", "emotion": "resignation"}
]
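The raw response above is a JSON array mapping each comment id to its four coded dimensions. A minimal Python sketch of how such a response could be parsed into a per-comment lookup (the field names come from the JSON above; the `parse_codes` helper and the two-entry excerpt used here are illustrative, not part of the original pipeline):

```python
import json

# Excerpt of a raw LLM response: a JSON array of per-comment codes.
# Ids and values are copied from the first two entries shown above.
raw_response = '''[
  {"id": "ytc_Ugy1JGxMW40UcHEQKiF4AaABAg", "responsibility": "developer",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgzFNKHz0M1mJj-MA-d4AaABAg", "responsibility": "user",
   "reasoning": "virtue", "policy": "none", "emotion": "fear"}
]'''

# The four coding dimensions seen in the response and the result table.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def parse_codes(raw: str) -> dict[str, dict[str, str]]:
    """Map each comment id to its coded dimensions, ignoring unknown keys."""
    records = json.loads(raw)
    return {
        rec["id"]: {dim: rec[dim] for dim in DIMENSIONS if dim in rec}
        for rec in records
    }

codes = parse_codes(raw_response)
print(codes["ytc_Ugy1JGxMW40UcHEQKiF4AaABAg"]["policy"])  # regulate
```

Keying the result by comment id makes it straightforward to join the codes back onto the original comments for inspection views like this one.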