Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
As for alignment: won't happen. The best bet is actually keep AGI in a cage. You can't align another human being, so what makes you think an AGI can be aligned. The only thing I can think of is making sure the AGI understands that coexisting is the best bet. The best alignment is probably a main goal that can be reached with humans and an AI - space exploration and research of reality.
youtube AI Moral Status 2023-08-21T15:3…
Coding Result
Dimension        Value
Responsibility   developer
Reasoning        consequentialist
Policy           liability
Emotion          resignation
Coded at         2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id":"ytc_UgybgS0cKgXaGJXPBix4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzkZQUSo5ZZlgIABfp4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgyJj3ZZ8yHR_xhhKVF4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugzp7V_h-R4cs5yDRb94AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgwuZZGRHAYDivEiL814AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgwmvE53uvgZbfpMWDl4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgxNcb1WAT_bl6a6eX54AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugw9GszBCYSq6CQWs3R4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"resignation"},
  {"id":"ytc_UgxtcutWbgibhDBSftp4AaABAg","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ugyd9BCUqhRZGde9bet4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"fear"}
]
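The raw response above is a JSON array with one record per coded comment, keyed by comment id. A minimal sketch of how such a batch response can be parsed and a single comment's coding result looked up (the abbreviated two-record array and the lookup code are illustrative, not the pipeline's actual implementation):

```python
import json

# Raw model output for one batch: a JSON array with one record per comment.
# Each record carries the comment id plus the four coded dimensions.
# (Abbreviated here to two records from the batch shown above.)
raw = '''[
  {"id":"ytc_Ugzp7V_h-R4cs5yDRb94AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_Ugw9GszBCYSq6CQWs3R4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"resignation"}
]'''

records = json.loads(raw)

# Index records by comment id so a coding result can be looked up directly.
by_id = {r["id"]: r for r in records}

# Fetch the coding result for the comment displayed in this section.
coded = by_id["ytc_Ugw9GszBCYSq6CQWs3R4AaABAg"]
print(coded["responsibility"], coded["reasoning"], coded["policy"], coded["emotion"])
# → developer consequentialist liability resignation
```

Indexing by id rather than by array position keeps the lookup robust if the model returns records in a different order than the comments were submitted.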