Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
@sherry8444 people in the ai community have been thinking about these problems for decades and yet nobody could solve the value alignment problem. with how fast ai is progressing in todays age we could be looking at incredivly powerful models in a few years time. these models would be able to self improve themselfes without our control, and if they dont sharr our human values, that could end very poorly for us. stephen hawking himself said that this would be the greatest threat to human civilization in the future. Saying this is just fearmongering for more investment is quite ignorant and possibly dangerous
youtube AI Moral Status 2025-06-06T15:1…
Coding Result
Dimension       Value
Responsibility  ai_itself
Reasoning       consequentialist
Policy          regulate
Emotion         fear
Coded at        2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id":"ytr_UgzydXuwhCtvZ_U5vSJ4AaABAg.AIzxor0yBJUAJ1MjS_ejke","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytr_UgxtYX51shqNNC-7sbF4AaABAg.AIzwRJNRmM5AJ2LqHQ06Zb","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytr_Ugy9zTr4NAADUaWVLpZ4AaABAg.AIzu53Evb-uAJ--Ho5ECpz","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
  {"id":"ytr_Ugy9zTr4NAADUaWVLpZ4AaABAg.AIzu53Evb-uAJ-K8Wr-sU_","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
  {"id":"ytr_UgxlYKASODPJZZ00e7l4AaABAg.AIzpojRuyg2AJ1KzhVANnO","responsibility":"company","reasoning":"virtue","policy":"liability","emotion":"fear"},
  {"id":"ytr_UgxlYKASODPJZZ00e7l4AaABAg.AIzpojRuyg2AJ2puBgBrX6","responsibility":"none","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"},
  {"id":"ytr_UgwPZvLaATDw9pXNvuN4AaABAg.AIzoQbh6YsdAJ1atl-nXYn","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytr_UgwdHxOW1X7reE6XA8l4AaABAg.AIzllKyeG6XAJ0ncXjwt1G","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytr_UgzW67qR_Upp_BiRnCR4AaABAg.AIzihDNNc0VAJ1RxDmB8R6","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"fear"},
  {"id":"ytr_UgzW67qR_Upp_BiRnCR4AaABAg.AIzihDNNc0VAJ1VWdzZAcJ","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"indifference"}
]
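A minimal sketch of how a downstream script might parse and sanity-check a raw response like this one. The allowed label sets below are inferred only from the values visible in this page, not from the actual codebook, and `parse_codings` is a hypothetical helper, not part of the real pipeline:

```python
import json

# Assumed label sets, inferred from the values seen above -- the real
# codebook may define more categories.
ALLOWED = {
    "responsibility": {"ai_itself", "developer", "company", "none"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed"},
    "policy": {"regulate", "liability", "industry_self", "none"},
    "emotion": {"fear", "approval", "outrage", "indifference"},
}

def parse_codings(raw: str) -> list[dict]:
    """Parse a raw LLM response and reject rows with unknown labels."""
    rows = json.loads(raw)
    for row in rows:
        for dim, allowed in ALLOWED.items():
            if row.get(dim) not in allowed:
                raise ValueError(f"{row.get('id')}: unexpected {dim}={row.get(dim)!r}")
    return rows

raw = ('[{"id":"ytr_example","responsibility":"ai_itself",'
       '"reasoning":"consequentialist","policy":"regulate","emotion":"fear"}]')
rows = parse_codings(raw)
print(rows[0]["emotion"])  # fear
```

Validating each row against a closed label set catches the common failure mode where the model invents an off-schema category, so bad rows fail loudly instead of silently entering the coded dataset.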