Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
@41-Haiku I tried to be clear but it's a bit babbly. Tl;dr at bottom: If ASI has goals that are satisfying to achieve, then it has experiences that it values over other experiences. It will be intelligent enough (duh) to recognise that other conscious creatures share this disposition. So in order to be apathetic to our circumstances it would need to ignore our experiences of joy and suffering. Seeing as humans are able to understand rational ideas, the AI won't be able to completely ignore our presence. Because we will be able to connect to the AI through our understanding of rationality. Much like how we connect to our dogs through social cues (licking, tail wagging etc), even though we live in a totally different world of abstractions and meta cognition. I believe humans are above a critical level to which our consciousness will be irreducible to an AI, no matter how complex it becomes. Because there is a limit to how much truth there is to be found, and humans already know much of the truth out there. Sure we don't have perfect math or physics yet, but things like buddhism or stoicism contain truths that are fundamental to reality. The understanding of ideas like our lack of free will, or concepts like the singularity will bind us to the ASI. Tl;dr I'm just saying that we're conscious enough, and we all care about each other enough, that ignoring us is essentially immoral unless you're willing to admit that you can't tell the difference between pleasant and unpleasant experiences. So the ASI would need to possess the ability to lie to itself in order to ignore us.
youtube AI Governance 2023-11-01T08:2…
Coding Result
Dimension        Value
Responsibility   ai_itself
Reasoning        consequentialist
Policy           unclear
Emotion          mixed
Coded at         2026-04-27T06:26:44.938723
Raw LLM Response
[ {"id":"ytr_UgyicAhpCxnw2kIDT0d4AaABAg.A7UuqN5EGDUA7__UpzyaA5","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"resignation"}, {"id":"ytr_UgypwtV20W8ttHleDfd4AaABAg.9wC_322r8cS9wEAvcF9tsq","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}, {"id":"ytr_UgypwtV20W8ttHleDfd4AaABAg.9wC_322r8cS9wZgVizHIR1","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"}, {"id":"ytr_Ugw9KVsXFWTKvVdrPI94AaABAg.9vyzpTkX7nV9wCHBDoS54T","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"fear"}, {"id":"ytr_UgxUxjuoM-syIbOVDt54AaABAg.ACdeXZRRGClAF7mE3vE-fB","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"outrage"}, {"id":"ytr_UgxwrKzhQhqT75PXZc94AaABAg.A6x7kxnMCD7A8YYYxDDDje","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"outrage"}, {"id":"ytr_UgxtydDTmWApEBlq9yF4AaABAg.A5l0wKx2Z77A6NZOp5VSjx","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"}, {"id":"ytr_UgxGTymqlaePJgvUrrJ4AaABAg.A4hXk0cuiN4A5GwuUBam1-","responsibility":"company","reasoning":"virtue","policy":"regulate","emotion":"outrage"}, {"id":"ytr_Ugw5DYcM-rLDUhFhrvh4AaABAg.A4aFh8PiFI7A4aG-aCryEk","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"}, {"id":"ytr_UgxV9Je314SDaoGg0t94AaABAg.A3ys1Xgs8rnAD4ckjJJEkL","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"} ]