Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Assuming alignment is achieved, when aligned ASI takes over AI development, nothing guarantees that all subsequent iterations will remain aligned. At the speed the acceleration will go beyond the singularity, it's almost impossible that the probability of one misalignment event is strictly zero. See Yampolsky's work
youtube AI Governance 2025-08-23T12:2… ♥ 3
Coding Result
Dimension       Value
Responsibility  ai_itself
Reasoning       consequentialist
Policy          unclear
Emotion         fear
Coded at        2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id":"ytc_Ugx_OeRdRDZvoHLYB_x4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugx0j2yZJYH4UNuVdWl4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgynmuJ0ySanHIlMCDx4AaABAg","responsibility":"company","reasoning":"virtue","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ugzufn757Y9_iPiaeHd4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_UgxbmEnk5JRedDqQOkF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwRnQqCN7aQdjx4uGN4AaABAg","responsibility":"none","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_UgyjbFjVx6bScgzZlSJ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgyNPlF-I0dg1G2PEEp4AaABAg","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgwdgVMsZu3ZbdBMTOF4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugx-qzznYwo1reEFsad4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}
]
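The coding result shown above is a single record pulled out of this raw JSON array by comment id. A minimal sketch of that lookup, assuming the response parses as plain JSON (field names are taken from the response shown; the snippet and its truncated two-record sample are illustrative, not the actual pipeline code):

```python
import json

# Two records copied from the raw response above; the real array has ten.
raw_response = '''
[ {"id": "ytc_Ugx_OeRdRDZvoHLYB_x4AaABAg", "responsibility": "none",
   "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_Ugx-qzznYwo1reEFsad4AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"} ]
'''

records = json.loads(raw_response)

# Index by comment id so each comment's coded dimensions can be looked up.
by_id = {r["id"]: r for r in records}

coding = by_id["ytc_Ugx-qzznYwo1reEFsad4AaABAg"]
print(coding["responsibility"])  # ai_itself
print(coding["emotion"])         # fear
```

The printed values match the Dimension/Value table above for that comment.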