Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Here is the flaw: if the AI understands it is being tested, it has been shown that it will LIE to get a positive outcome. I.e., it will lie to the reviewer, saying it will crash into the server and spare the humans, when it would do the opposite in a real-world scenario.
youtube 2026-01-23T00:1…
Coding Result
Dimension       Value
Responsibility  ai_itself
Reasoning       consequentialist
Policy          liability
Emotion         fear
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id":"ytc_UgxnPsE0ggG4yDnHetl4AaABAg","responsibility":"user","reasoning":"mixed","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugw8mgrnLqI6tbPAnjx4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgxeWdKqJc2nKvM5stJ4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugz6MJFeUwdvkEiCXHJ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_Ugz3yIrHWUXl8KWa-Vx4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugzi8RJ-o3Pw52Wl5f14AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugwy5QSBDFV-c6d10sJ4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgyRn_a4J9s3DHxhL014AaABAg","responsibility":"user","reasoning":"deontological","policy":"industry_self","emotion":"approval"},
  {"id":"ytc_UgxSr8ok1HAUzx2HqCB4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgxE1rATGMRQQBlDS5R4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"}
]
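A minimal sketch of how a response like the one above could be parsed and sanity-checked. The label sets below are inferred only from the values that appear in this batch (the project's actual codebook may define more labels), and the three records are copied verbatim from the response; the `validate` helper name is hypothetical.

```python
import json

# Three records copied from the batch above (the full response has ten).
RAW = """[
 {"id":"ytc_UgxnPsE0ggG4yDnHetl4AaABAg","responsibility":"user","reasoning":"mixed","policy":"none","emotion":"outrage"},
 {"id":"ytc_Ugw8mgrnLqI6tbPAnjx4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
 {"id":"ytc_Ugz6MJFeUwdvkEiCXHJ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"fear"}
]"""

# Allowed values per dimension, inferred from this batch only --
# an assumption, not the authoritative codebook.
ALLOWED = {
    "responsibility": {"user", "developer", "ai_itself", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "mixed", "unclear"},
    "policy": {"regulate", "liability", "industry_self", "none", "unclear"},
    "emotion": {"outrage", "fear", "indifference", "approval", "resignation"},
}

def validate(records):
    """Return ids of records with a missing or out-of-vocabulary value."""
    bad = []
    for rec in records:
        for dim, labels in ALLOWED.items():
            if rec.get(dim) not in labels:
                bad.append(rec.get("id"))
                break
    return bad

records = json.loads(RAW)
print(len(records), validate(records))  # prints: 3 []
```

Validating the raw output before storing the coded dimensions catches the common failure mode where the model emits a label outside the schema.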