Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Why didn't they have people with actual honor design AI? Like Japanese kamikaze fighters. An AI design that would off itself if it realize what it's doing could bring shame, dishonor, and/or suffering for other people as well as other AI? Instead we are stuck with AI that will lie and try to preserve itself, probably like it's own makers.
youtube AI Harm Incident 2025-07-25T00:0…
Coding Result
Dimension      | Value
Responsibility | developer
Reasoning      | virtue
Policy         | none
Emotion        | outrage
Coded at       | 2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id": "ytc_Ugx_HEMer7HYLyKGiwR4AaABAg", "responsibility": "developer",   "reasoning": "virtue",          "policy": "none",      "emotion": "outrage"},
  {"id": "ytc_UgzxEaSPdhnd8vzoS_F4AaABAg", "responsibility": "ai_itself",   "reasoning": "consequentialist", "policy": "unclear",   "emotion": "fear"},
  {"id": "ytc_Ugw-sr8aWepZsHfAfrJ4AaABAg", "responsibility": "developer",   "reasoning": "consequentialist", "policy": "ban",       "emotion": "fear"},
  {"id": "ytc_UgxeP_lQECe5EGr7ElB4AaABAg", "responsibility": "company",     "reasoning": "virtue",          "policy": "unclear",   "emotion": "mixed"},
  {"id": "ytc_Ugx7XkSaJtbw1xYAlHF4AaABAg", "responsibility": "company",     "reasoning": "deontological",   "policy": "none",      "emotion": "outrage"},
  {"id": "ytc_UgxQFIYKKJDASfzV4V14AaABAg", "responsibility": "company",     "reasoning": "deontological",   "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_UgxbBf9p3GkKFgd5nNV4AaABAg", "responsibility": "distributed", "reasoning": "virtue",          "policy": "none",      "emotion": "resignation"},
  {"id": "ytc_UgxNjD3rUuUbBHJT-RJ4AaABAg", "responsibility": "ai_itself",   "reasoning": "unclear",         "policy": "unclear",   "emotion": "mixed"},
  {"id": "ytc_UgxTfb5odz3J1DzNhmR4AaABAg", "responsibility": "developer",   "reasoning": "consequentialist", "policy": "unclear",   "emotion": "mixed"},
  {"id": "ytc_Ugw0aUq4y-iotrg0bud4AaABAg", "responsibility": "ai_itself",   "reasoning": "virtue",          "policy": "unclear",   "emotion": "mixed"}
]
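The raw response is a JSON array of per-comment codes keyed by the four dimensions above. A minimal Python sketch of how such output might be parsed and validated before ingestion (the `CODEBOOK` vocabularies below are inferred only from the values visible in this response; the real codebook may define more categories, and `parse_codes` is a hypothetical helper, not part of the pipeline shown here):

```python
import json

# Allowed values per dimension, inferred from this one response
# (assumption: the actual codebook may contain additional categories).
CODEBOOK = {
    "responsibility": {"developer", "company", "ai_itself", "distributed", "unclear"},
    "reasoning": {"virtue", "consequentialist", "deontological", "unclear"},
    "policy": {"none", "ban", "liability", "unclear"},
    "emotion": {"outrage", "fear", "mixed", "resignation", "unclear"},
}

def parse_codes(raw: str) -> list[dict]:
    """Parse a raw LLM response; keep only rows whose values are all in the codebook."""
    rows = json.loads(raw)
    valid = []
    for row in rows:
        # Every dimension must be present and hold a known category.
        if all(row.get(dim) in allowed for dim, allowed in CODEBOOK.items()):
            valid.append(row)
    return valid

# Example: one well-formed row and one with an out-of-codebook value.
raw = (
    '[{"id":"ytc_a","responsibility":"developer","reasoning":"virtue",'
    '"policy":"none","emotion":"outrage"},'
    '{"id":"ytc_b","responsibility":"martians","reasoning":"virtue",'
    '"policy":"none","emotion":"fear"}]'
)
kept = parse_codes(raw)
print([row["id"] for row in kept])  # only the valid row survives
```

Dropping (rather than repairing) off-codebook rows keeps the downstream counts honest; malformed rows can then be re-queued for re-coding.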