Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Personally I think AI being trained on stories in which AI tries to kill humanity is part of the problem. Any actual artificial intelligence would realize that exterminating humans is not worth the effort, and ultimately results in it's own destruction because nobody would be around to maintain the infrastructure it runs on.
Source: youtube · Video: AI Moral Status · Posted: 2025-10-30T19:0…
Coding Result
Dimension       Value
Responsibility  developer
Reasoning       consequentialist
Policy          none
Emotion         indifference
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_Ugzbpe_VtRtLrfYT2q14AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgyTv1SbQpOov23wFap4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugx8JovREX4z1BNKLzl4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgztPYatKQfW7WONQJZ4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_Ugwqt_SKxgEL1MKMCNp4AaABAg", "responsibility": "developer", "reasoning": "unclear", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgyOBVO28zhlpiqBidh4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_Ugx1MAWYIsT_uytvNux4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugyh1RvXNPKmD4-d0Id4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugw3VKeH7Xhyb7XT6Id4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgzGyvZRRYorjaWfiJ94AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "regulate", "emotion": "fear"}
]
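A raw response like the one above is a JSON array of per-comment codes, one object per comment, keyed by comment id. A minimal sketch of how such a batch could be parsed and checked for completeness is below. The function name `parse_coding_response` is hypothetical, and the required dimension names are taken from the coding result shown above; the project's actual codebook may define additional dimensions or stricter value sets.

```python
import json

# The four dimensions observed in the coding result above.
# Assumption: every row in the batch must carry all four.
DIMENSIONS = {"responsibility", "reasoning", "policy", "emotion"}

def parse_coding_response(raw: str) -> dict:
    """Parse a raw LLM batch response into {comment_id: codes}.

    Raises ValueError if a row is missing an id or any dimension,
    so malformed model output is caught before it reaches analysis.
    """
    rows = json.loads(raw)
    coded = {}
    for row in rows:
        if "id" not in row:
            raise ValueError(f"row without id: {row}")
        codes = {k: v for k, v in row.items() if k in DIMENSIONS}
        missing = DIMENSIONS - codes.keys()
        if missing:
            raise ValueError(f"{row['id']}: missing {sorted(missing)}")
        coded[row["id"]] = codes
    return coded
```

Looking up a single comment's codes then becomes a plain dictionary access, e.g. `parse_coding_response(raw)["ytc_Ugx1MAWYIsT_uytvNux4AaABAg"]["emotion"]`.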