Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Who is held accountable when your agent is corrupted or goes rogue? Man in the middle if any? Agent brings in maliciously altered skills that were trusted by previous versions? From a security perspective AI systems are a big risk if they have access to IP.
youtube AI Jobs 2026-02-06T13:3…
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  distributed
Reasoning       deontological
Policy          regulate
Emotion         fear
Coded at        2026-04-26T19:39:26.816318
Raw LLM Response
[
  {"id": "ytc_UgzlnUESW9ImPjk-5aB4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"},
  {"id": "ytc_Ugzyl55JI-chvrPOi_J4AaABAg", "responsibility": "ai_itself", "reasoning": "mixed", "policy": "unclear", "emotion": "outrage"},
  {"id": "ytc_Ugz3AiGmh-l-M3verjN4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugw8tR8Ijjuii3ITGIR4AaABAg", "responsibility": "distributed", "reasoning": "deontological", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgxTzq7LpLnbDLSJeGt4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "liability", "emotion": "outrage"}
]
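A minimal sketch of how a coding result like the one above can be recovered from the raw response: parse the JSON array, index the entries by comment id, and look up the id of the comment shown on this page. The field names (`id`, `responsibility`, `reasoning`, `policy`, `emotion`) come from the raw response itself; the parsing approach is an assumption, not part of the pipeline shown here.

```python
import json

# Raw LLM response, abridged to two of the entries shown above.
raw = """[
  {"id": "ytc_UgzlnUESW9ImPjk-5aB4AaABAg", "responsibility": "company",
   "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"},
  {"id": "ytc_Ugw8tR8Ijjuii3ITGIR4AaABAg", "responsibility": "distributed",
   "reasoning": "deontological", "policy": "regulate", "emotion": "fear"}
]"""

# Index the coded entries by comment id for O(1) lookup.
codes = {item["id"]: item for item in json.loads(raw)}

# The comment on this page carries the fourth id in the batch.
coding = codes["ytc_Ugw8tR8Ijjuii3ITGIR4AaABAg"]
print(coding["responsibility"], coding["policy"])  # distributed regulate
```

Batching several comments per request and matching results back by id is what makes the per-comment coding table above reconstructible from a single raw response.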