Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
That desire to perpetuate itself is what makes AI so dangerous. It's what caused HAL 9000 to turn on its crew and kill them. Once an AI's mission is programmed but it's purpose has been fulfilled the AI comes up with anything to stay on in to keep fulfilling It's mission. Otherwise, the AI can't do its job right? It doesn't matter if the AI harms someone in the process, its job is to fulfill its mission.
Source: youtube · AI Harm Incident · 2025-09-12T09:2…
Coding Result
Dimension        Value
Responsibility   ai_itself
Reasoning        consequentialist
Policy           none
Emotion          fear
Coded at         2026-04-27T06:26:44.938723
Raw LLM Response
[ {"id":"ytc_UgxbpoKy4_uqTW2nLpd4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"}, {"id":"ytc_UgzHaMUV8VCbvzFkVzx4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"none","emotion":"indifference"}, {"id":"ytc_UgzhSJ_5O4O65dbR7Mh4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"outrage"}, {"id":"ytc_UgxfZckJudmIzXXyC1B4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"outrage"}, {"id":"ytc_Ugz4swK42M-LCm1NOyh4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"}, {"id":"ytc_Ugyul3UGFbiEAfEmWGB4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"resignation"}, {"id":"ytc_Ugwe5gNnVCdXCG1g1FJ4AaABAg","responsibility":"company","reasoning":"contractualist","policy":"regulate","emotion":"fear"}, {"id":"ytc_UgxgqRGVhpP4hE0im-N4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"outrage"}, {"id":"ytc_UgzNEj9au-LtjzbP9bV4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"}, {"id":"ytc_UgyqWaNIaft6vjTSNwd4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"none","emotion":"resignation"} ]