Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I found the solution: we create a virus to destroy AI and humans should be armed with this before we develop AGI. In the event that the system attempts to destroy humans, we destroy them.
Source: youtube · AI Governance · 2025-09-06T09:2…
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  ai_itself
Reasoning       consequentialist
Policy          ban
Emotion         fear
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_Ugxm4P1Jl6mb5MGkCIJ4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgwZ2FkKVbi9TqNJ48h4AaABAg", "responsibility": "distributed", "reasoning": "mixed", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_Ugy3kqy5LzZ5HCWrvEd4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"},
  {"id": "ytc_Ugw3c13TVOXA5BFHxf54AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "ban", "emotion": "fear"},
  {"id": "ytc_UgzD7GQbmsOiIfYT43V4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_UgxkKRMfLGXJENhEDcF4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgzJPVW2tx3hwUPx9GR4AaABAg", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgzPzULdhU6xXqLkllV4AaABAg", "responsibility": "developer", "reasoning": "virtue", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_UgzjeXy_FOYeNfDBc694AaABAg", "responsibility": "developer", "reasoning": "mixed", "policy": "regulate", "emotion": "mixed"},
  {"id": "ytc_UgxyKjMKjX0CdxFCYqR4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "indifference"}
]
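A raw response like the one above can be parsed and sanity-checked before the codes are ingested. The sketch below is a minimal example of that step; the allowed vocabularies are inferred only from the codes visible in this sample, so the real codebook may include values not listed here, and the comment id used in the demo is hypothetical.

```python
import json

# Allowed codes per dimension, inferred from the sample response above.
# Assumption: the actual codebook may contain additional values.
ALLOWED = {
    "responsibility": {"none", "distributed", "developer", "ai_itself", "company", "unclear"},
    "reasoning": {"mixed", "consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"none", "regulate", "liability", "ban", "unclear"},
    "emotion": {"resignation", "fear", "outrage", "approval", "mixed", "indifference", "unclear"},
}

def parse_coding_response(raw: str) -> list[dict]:
    """Parse a raw LLM coding response; keep only records whose codes are in-vocabulary."""
    records = json.loads(raw)  # raises json.JSONDecodeError on malformed output
    return [
        rec for rec in records
        if all(rec.get(dim) in codes for dim, codes in ALLOWED.items())
    ]

# Demo with a hypothetical comment id (not from the dataset above):
raw = '[{"id":"ytc_example","responsibility":"developer","reasoning":"virtue","policy":"liability","emotion":"outrage"}]'
print(parse_coding_response(raw))
```

Filtering rather than raising on out-of-vocabulary codes lets a batch run continue when the model occasionally invents a label; the dropped records can be re-queued for recoding.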