Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Personally I don't think that a perfectly nihilistic Ai that is would ever have the goal of killing some stupid chimps, But the chances of systems going through catastrophic unintended failure are extremely high and cases of people fiddling with systems they don't fully understand and F'ing big time are all over history books this should not be debated, it's just a common fact
youtube 2020-01-31T20:1…
Coding Result
Dimension       Value
Responsibility  developer
Reasoning       consequentialist
Policy          none
Emotion         fear
Coded at        2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id":"ytc_UgzL0pUMLLwL1ct1UcV4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgzPgKgzmi6ht-zyIm54AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgwVoJoWz1X0ALjyI3N4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgyBb7bGF9NefEGD_K14AaABAg","responsibility":"government","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_Ugz-19ekQVhFElTInsN4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgyMARCQefsn0MDvXTp4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgzR5VxVqXBDei8wyyx4AaABAg","responsibility":"company","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_Ugzh3M4DPh9TlNuB2Ox4AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgxUvujdDSASDkqD3lh4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"fear"},
  {"id":"ytc_UgyM85k0NGfHG-MmcSd4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"}
]
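The raw response is a JSON array of coding objects, one per comment id, so the coding for any one comment can be looked up by indexing on `id`. A minimal sketch using only the standard-library `json` module (the truncated `raw` string is a placeholder for the full response above):

```python
import json

# Raw LLM response: a JSON array of per-comment coding objects.
# Shown here with a single entry for brevity; in practice this is
# the full array from the response above.
raw = (
    '[{"id":"ytc_UgzL0pUMLLwL1ct1UcV4AaABAg",'
    '"responsibility":"developer","reasoning":"consequentialist",'
    '"policy":"none","emotion":"outrage"}]'
)

# Index the codings by comment id for direct lookup.
codings = {row["id"]: row for row in json.loads(raw)}

print(codings["ytc_UgzL0pUMLLwL1ct1UcV4AaABAg"]["emotion"])  # outrage
```

This is how the per-comment "Coding Result" table can be reconstructed from the batch response: parse once, then pull the row matching the comment being inspected.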