Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
32:31 security researchers have repeatedly demonstrated the gross inadequacy of existing safety measures, such as RLHF (Reinforcement Learning from Human Feedback). Countless primitive tricks like asking the same question in a foreign language or via ASCII art have circumvented the paper-thin defenses of LLMs.
Platform: YouTube · Video: AI Governance · Posted: 2024-03-11T17:2… · ♥ 3
Coding Result
Dimension        Value
Responsibility   company
Reasoning        consequentialist
Policy           regulate
Emotion          outrage
Coded at         2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id": "ytc_Ugy2Pui57718Xirg0Mh4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgxC4_iAYUS40Pt_SwB4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgzFczuHq5bK3LnIBy14AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgwkDEEEafWsJCSQ31Z4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgzeV6MD8EU_W16CIMV4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_Ugwc7xTogMyFG692MTh4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgwooQ7Gd6f5-o7D11N4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_UgxN8aydVMYRoXlzyJZ4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_UgzaaCdAn2ghHE5w6EZ4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "ban", "emotion": "outrage"},
  {"id": "ytc_UgxHF19HipSpo2KBSXt4AaABAg", "responsibility": "user", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"}
]
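The raw response is a JSON array of per-comment coding records keyed by comment id. A minimal sketch of looking up one comment's coding from such a payload (the variable and function names here are illustrative, not part of the tool; the records are abbreviated from the array above):

```python
import json

# Abbreviated raw LLM response: a JSON array of per-comment codings.
raw_response = '''[
  {"id": "ytc_UgzeV6MD8EU_W16CIMV4AaABAg", "responsibility": "company",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_UgxHF19HipSpo2KBSXt4AaABAg", "responsibility": "user",
   "reasoning": "consequentialist", "policy": "none", "emotion": "fear"}
]'''

def coding_for(raw: str, comment_id: str):
    """Return the coding record matching comment_id, or None if absent."""
    for record in json.loads(raw):
        if record["id"] == comment_id:
            return record
    return None

coding = coding_for(raw_response, "ytc_UgzeV6MD8EU_W16CIMV4AaABAg")
print(coding["emotion"])  # the coded emotion for that comment
```

Matching on the `id` field is what ties each record back to a specific comment, so a malformed or missing id in the model output surfaces as a `None` lookup rather than a silently wrong coding.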