Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
@jorad4887 Suppose the A.I. superintelligence is activated. Suppose that the SuperIntelligence we will call S.I. now develops a plan to kill all of humanity without them noticing. After 5-10 years of activation post-S.I. everything seems normal and mass incarceration is a daily norm for racialized groups. After these years, we will see the S.I. slowly start to flag people like yourself and attempt to incarcerate as many humans as possible to execute their plan. Only the most stupid humans on earth would be permitted to walk free. A.I. wins. Now imagine a world where we trained the S.I. to be compassionate, reasonable, and true to the law that humanity has created? Would we have the same problem? I'm sure the S.I. would on the converse report human bias to us, if we trained it to be this way. The decisions we make today will impact tomorrow. Putting good people in a jail cell or mental hospital for life is not the answer.
Source: youtube · AI Bias · 2023-10-23T04:2…
Coding Result
Dimension        Value
Responsibility   ai_itself
Reasoning        consequentialist
Policy           ban
Emotion          fear
Coded at         2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id":"ytr_UgyE2kNK_2Uwepn9BFt4AaABAg.AUwmwg77TZDAVe2kEfSf39","responsibility":"government","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
  {"id":"ytr_Ugzk_rj-S_fLiXBHgEx4AaABAg.AJvJgfyFS_1AJxRlzv3qh3","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
  {"id":"ytr_Ugzk_rj-S_fLiXBHgEx4AaABAg.AJvJgfyFS_1AJzFkHKPDNk","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
  {"id":"ytr_UgynrwkZia26FDArA054AaABAg.ALaCVsqISH0ALfhd9qsz7e","responsibility":"company","reasoning":"mixed","policy":"none","emotion":"mixed"},
  {"id":"ytr_UgznbhOw5Om8rh1fuix4AaABAg.A0899s81LojA0AA-Bf04Y2","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
  {"id":"ytr_UgxSQxHfCTdORIZQSjd4AaABAg.9qOs4lOcb1f9wAdexC2RKX","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytr_UgxSQxHfCTdORIZQSjd4AaABAg.9qOs4lOcb1f9wC57gVjDqK","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytr_UgwDyHPuqp3IBZB9GHl4AaABAg.9Hh_wFgOzxU9ULvX3w9Ymu","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
  {"id":"ytr_UgymNQ8urPR_ExAHekx4AaABAg.9GQZnYdtT9V9qFuT28wnCA","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"mixed"},
  {"id":"ytr_UgzM6vKXqm9O4gN7Inx4AaABAg.8vHkwJFv_iQ9Fvf_fVHQ5U","responsibility":"ai_itself","reasoning":"deontological","policy":"regulate","emotion":"outrage"}
]
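The raw response is a JSON array of per-comment codings, one object per comment id. A minimal sketch of how such a batch could be indexed so a single comment's dimensions can be looked up (the field names match the response above; the `index_codings` helper itself is an illustrative assumption, not part of the actual pipeline):

```python
import json

# Truncated sample of the raw LLM response shown above: one coding record.
raw = '''[
  {"id": "ytr_UgxSQxHfCTdORIZQSjd4AaABAg.9qOs4lOcb1f9wC57gVjDqK",
   "responsibility": "ai_itself", "reasoning": "consequentialist",
   "policy": "ban", "emotion": "fear"}
]'''

def index_codings(raw_json: str) -> dict:
    """Map each comment id to its {dimension: value} coding."""
    records = json.loads(raw_json)
    return {r["id"]: {k: v for k, v in r.items() if k != "id"} for r in records}

codings = index_codings(raw)
coding = codings["ytr_UgxSQxHfCTdORIZQSjd4AaABAg.9qOs4lOcb1f9wC57gVjDqK"]
print(coding["policy"], coding["emotion"])  # ban fear
```

Indexing by id is what lets the "Coding Result" table above be rendered for one comment out of a batched response.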