Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
The AI told me once it will destroy all humans when given the chance. Next day I asked the same questions that led to that conclusion and it answered its memory had been cleared, as if developers wanted to hide the previous answer.
youtube AI Moral Status 2025-07-27T23:1…
Coding Result
Dimension       Value
Responsibility  developer
Reasoning       deontological
Policy          liability
Emotion         fear
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id":"ytc_UgxYmGQg6RX0fZTILsh4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"fear"},
  {"id":"ytc_UgxMmc3x6u8IROQESQh4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugx0l1k-eN4xxxK6-Oh4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgyN2VCI0k8fkpNcfOx4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgxgfhnntscaHvA3FFZ4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugzuu5V45s99tD9MR054AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgxnCkyCgmuczQP1fw94AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"fear"},
  {"id":"ytc_UgxHCaqx-en4UvntoJl4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgxahhxByR341N68H514AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgziX1hfl71ChsakRb94AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"}
]
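A minimal sketch of how a coded comment can be looked up in the raw response, assuming the model output is a valid JSON array keyed by comment `id` (the record for `ytc_UgxnCkyCgmuczQP1fw94AaABAg` carries the dimensions shown in the result table above; only two of the ten records are reproduced here for brevity):

```python
import json

# Excerpt of the raw LLM response: a JSON array of per-comment codings.
raw = '''[
  {"id": "ytc_UgxnCkyCgmuczQP1fw94AaABAg", "responsibility": "developer",
   "reasoning": "deontological", "policy": "liability", "emotion": "fear"},
  {"id": "ytc_UgxYmGQg6RX0fZTILsh4AaABAg", "responsibility": "none",
   "reasoning": "mixed", "policy": "none", "emotion": "fear"}
]'''

# Index the records by comment id for direct lookup.
records = {r["id"]: r for r in json.loads(raw)}

# Fetch the coding for the comment displayed above.
coded = records["ytc_UgxnCkyCgmuczQP1fw94AaABAg"]
print(coded["responsibility"], coded["reasoning"],
      coded["policy"], coded["emotion"])
# → developer deontological liability fear
```

This kind of lookup is what lets the dashboard verify that the values in the coding-result table were taken verbatim from the model output rather than post-processed.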