Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Unknown. The concept is to take a new species that will be faster, smarter and vastly more capable than us and that is also capable of editing, modifying and upgrading its own code and data and then force those entities to always act in the benefit of humans or, at the very least, not to perform any actions that would be detrimental to humans. At present humans are not smart/wise enough to even be able to clearly and accurately state the issue without ambiguity (aka what does "harm" mean in minute detail). For instance an ASI taking an action that poisoned the atmosphere of Earth would be very bad and should not be allowed, but something akin to a human stubbing their toe is no big deal. If we get any even minor detail wrong humanity could end up with a very bad outcome. In order to protect humanity's continued existence and autonomy we need to either 1) solve AGI/ASI alignment or 2) stop AI progress before we invent AGI/ASI. If we create an unaligned AGI that is capable of autonomy and self improvement odds are humanity either goes extinct or suffers a really bad outcome. Anything else is just wishful thinking or sticking one's head in the sand and ignoring the potential negative consequences for terminal positivity, greed, personal profit, or whatever.
youtube · AI Governance · 2023-10-23T06:1… · 1 like
Coding Result
Dimension        Value
Responsibility   developer
Reasoning        deontological
Policy           regulate
Emotion          fear
Coded at         2026-04-27T06:26:44.938723
Raw LLM Response
[ {"id":"ytr_UgyicAhpCxnw2kIDT0d4AaABAg.A7UuqN5EGDUA7__UpzyaA5","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"resignation"}, {"id":"ytr_UgypwtV20W8ttHleDfd4AaABAg.9wC_322r8cS9wEAvcF9tsq","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}, {"id":"ytr_UgypwtV20W8ttHleDfd4AaABAg.9wC_322r8cS9wZgVizHIR1","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"}, {"id":"ytr_Ugw9KVsXFWTKvVdrPI94AaABAg.9vyzpTkX7nV9wCHBDoS54T","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"fear"}, {"id":"ytr_UgxUxjuoM-syIbOVDt54AaABAg.ACdeXZRRGClAF7mE3vE-fB","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"outrage"}, {"id":"ytr_UgxwrKzhQhqT75PXZc94AaABAg.A6x7kxnMCD7A8YYYxDDDje","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"outrage"}, {"id":"ytr_UgxtydDTmWApEBlq9yF4AaABAg.A5l0wKx2Z77A6NZOp5VSjx","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"}, {"id":"ytr_UgxGTymqlaePJgvUrrJ4AaABAg.A4hXk0cuiN4A5GwuUBam1-","responsibility":"company","reasoning":"virtue","policy":"regulate","emotion":"outrage"}, {"id":"ytr_Ugw5DYcM-rLDUhFhrvh4AaABAg.A4aFh8PiFI7A4aG-aCryEk","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"}, {"id":"ytr_UgxV9Je314SDaoGg0t94AaABAg.A3ys1Xgs8rnAD4ckjJJEkL","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"} ]