Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
38:38 the notion that AI is only an existential risk if we let it be is an argument on behalf of AI safety, not against it. It is what the vast majority of the AI safety community has been saying for decades.
Source: YouTube · AI Governance · 2024-03-11T17:5… · ♥ 2
Coding Result
Dimension: Value
Responsibility: none
Reasoning: consequentialist
Policy: none
Emotion: approval
Coded at: 2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id":"ytc_Ugy2Pui57718Xirg0Mh4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgxC4_iAYUS40Pt_SwB4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzFczuHq5bK3LnIBy14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgwkDEEEafWsJCSQ31Z4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgzeV6MD8EU_W16CIMV4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_Ugwc7xTogMyFG692MTh4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwooQ7Gd6f5-o7D11N4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgxN8aydVMYRoXlzyJZ4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgzaaCdAn2ghHE5w6EZ4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgxHF19HipSpo2KBSXt4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"fear"}
]
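A minimal sketch of how a raw response like the one above can be inspected: parse the JSON array, look up a single comment's coding by its `id` (the first id here is the one shown in the coding table), and tally the `emotion` field across the batch. The variable names are illustrative, not part of any particular tool.

```python
import json
from collections import Counter

# The raw LLM response: a JSON array with one coding object per comment.
raw_response = """[
  {"id":"ytc_Ugy2Pui57718Xirg0Mh4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgxC4_iAYUS40Pt_SwB4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzFczuHq5bK3LnIBy14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgwkDEEEafWsJCSQ31Z4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgzeV6MD8EU_W16CIMV4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_Ugwc7xTogMyFG692MTh4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwooQ7Gd6f5-o7D11N4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgxN8aydVMYRoXlzyJZ4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgzaaCdAn2ghHE5w6EZ4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgxHF19HipSpo2KBSXt4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"fear"}
]"""

codings = json.loads(raw_response)

# Index the batch by comment id for direct lookup.
by_id = {c["id"]: c for c in codings}

# The coding for the comment shown above.
coding = by_id["ytc_Ugy2Pui57718Xirg0Mh4AaABAg"]

# Distribution of the emotion dimension across the batch.
emotion_counts = Counter(c["emotion"] for c in codings)
```

Looking the coding up by `id` rather than by position guards against the model returning entries out of order, which is worth checking whenever the raw output is inspected.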