Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Tech bros act like we've invented a prefrontal cortex when really we've invented a Broca-Wernicke's area. Btw, AI doesn't need to be intelligent or conscious in order to be dangerous at a human extinction level because current LLM's, if jailbreaked, or if you download a unfiltered one off the internet, could tell you how to make DIY anthrax or ebola which you could release in the water supply. Imagine if some selfish idiot made a highly infectious, highly latent, highly deadly version of a virus because an LLM told them how.
youtube AI Governance 2025-08-26T16:3… ♥ 1
Coding Result
Dimension        Value
Responsibility   developer
Reasoning        consequentialist
Policy           unclear
Emotion          fear
Coded at         2026-04-26T19:39:26.816318
Raw LLM Response
[
  {"id": "ytc_UgxdpJSeUtp8sr5d2fN4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgywI0G0wPgDP4Xn1hl4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgwfZVoQhKmdpd2fT0J4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_Ugzvy0KFcG-q21C3ZJV4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgxB0c3uFNFfd8TXkKV4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"}
]
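The LLM response above is a batch: one JSON object per comment, keyed by comment `id`. The Coding Result table comes from the record whose values match it (`ytc_UgxB0c3uFNFfd8TXkKV4AaABAg`). A minimal sketch of how such a batched response might be parsed back into per-comment codes (the parsing logic here is an assumption, not the tool's actual implementation):

```python
import json

# Abbreviated copy of the raw LLM response shown above (one record kept).
raw_response = '''[
  {"id": "ytc_UgxB0c3uFNFfd8TXkKV4AaABAg", "responsibility": "developer",
   "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"}
]'''

# Index records by comment id so each coded comment can be looked up directly.
records = {rec["id"]: rec for rec in json.loads(raw_response)}

# Retrieve the codes for the comment shown in this section.
codes = records["ytc_UgxB0c3uFNFfd8TXkKV4AaABAg"]
print(codes["responsibility"], codes["emotion"])  # developer fear
```

Keying by `id` rather than by position makes the lookup robust if the model returns records in a different order than the comments were submitted.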