Raw LLM Responses

Inspect the exact model output behind any coded comment.

Comment
I recognize my ignorance towards this topic. I wonder if we could come up with something that can "attach" LLM by altering its code to the point that it dismantles it's foundation. Like a failsafe trigger that will take down any LLM that gets in touch with.
youtube AI Governance 2025-09-08T22:1…
Coding Result
Dimension       Value
Responsibility  ai_itself
Reasoning       consequentialist
Policy          liability
Emotion         fear
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[ {"id":"ytc_Ugxv3IrsPaUE9nh4ht54AaABAg","responsibility":"elite","reasoning":"unclear","policy":"unclear","emotion":"fear"}, {"id":"ytc_Ugz8O0veniKSI4eferZ4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"resignation"}, {"id":"ytc_Ugx4EqclHLIQKfPKwwx4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"}, {"id":"ytc_UgyGkoX9UfiEGypctYd4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}, {"id":"ytc_UgzIe80JAbcoEKIl27R4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"fear"}, {"id":"ytc_UgzXKCYZqsDwRw2-wBh4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"}, {"id":"ytc_UgwHBu-8qwDqXQKNPZR4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"regulate","emotion":"fear"}, {"id":"ytc_UgwEvHvMCdYK9vRJnUR4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"}, {"id":"ytc_UgzSt7HNlfaUCw553I14AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"}, {"id":"ytc_Ugy553FhxvPqD_nDO2F4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"outrage"} ]