Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
AI does not normally ask for clarifications. It just executes according to the prompt. If it knows what it did was catastrophic. It should be asking the user before apologizing!!!
Source: YouTube · AI Jobs · 2026-02-04T18:3…
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  ai_itself
Reasoning       deontological
Policy          none
Emotion         indifference
Coded at        2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id": "ytr_UgwRIXSAAkxrNOavfHl4AaABAg.ASppDNZKkA7ASz0XwKAwoU", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytr_Ugy0X6JBDYCGfzt7viF4AaABAg.ASpk5zDGeKSAT-4NKFnq1J", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytr_Ugy0X6JBDYCGfzt7viF4AaABAg.ASpk5zDGeKSAT-cSS3If9h", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytr_Ugx00rvQDGA9LNdqI-N4AaABAg.ASpdLX2a153ASphHmdq200", "responsibility": "company", "reasoning": "virtue", "policy": "none", "emotion": "approval"},
  {"id": "ytr_UgxZfId7HaSOyygR4Ox4AaABAg.ASpWCJrYQZEASr4WpYCTiU", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytr_Ugw0Y9CpbyAKd9Qi8Z94AaABAg.ASpMITP-3aUASqHh74qUSw", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytr_UgyeyE6o39TqZJpByIh4AaABAg.ASp5y25AwUQASrLPfkclJU", "responsibility": "company", "reasoning": "virtue", "policy": "none", "emotion": "outrage"},
  {"id": "ytr_UgyeyE6o39TqZJpByIh4AaABAg.ASp5y25AwUQATjLdKGd1YI", "responsibility": "company", "reasoning": "virtue", "policy": "none", "emotion": "outrage"},
  {"id": "ytr_Ugxj00QolrBly7fdSgF4AaABAg.ASoedp5QFkiASofWM5Z6ZU", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "none", "emotion": "indifference"},
  {"id": "ytr_UgwQ6nvZc5w9djm_NTR4AaABAg.ASoe3tHxo5nAT30QYYV6pS", "responsibility": "user", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"}
]
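A minimal sketch of how a batch response in this shape can be parsed and looked up by comment id. It assumes the raw LLM response is available as a JSON string; the field names (`id`, `responsibility`, `reasoning`, `policy`, `emotion`) mirror the output above, and everything else (variable names, the single-entry sample) is illustrative:

```python
import json

# Hypothetical: one entry from the raw response above, stored as a string.
raw_response = """[
  {"id": "ytr_Ugxj00QolrBly7fdSgF4AaABAg.ASoedp5QFkiASofWM5Z6ZU",
   "responsibility": "ai_itself", "reasoning": "deontological",
   "policy": "none", "emotion": "indifference"}
]"""

# Parse the batch and index the coded rows by comment id.
codings = {row["id"]: row for row in json.loads(raw_response)}

# Look up the coding for one comment and read off its labels.
row = codings["ytr_Ugxj00QolrBly7fdSgF4AaABAg.ASoedp5QFkiASofWM5Z6ZU"]
print(row["reasoning"], row["emotion"])  # deontological indifference
```

Indexing by id makes it straightforward to cross-check a single comment's labels (as in the Coding Result table above) against the raw batch output.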