Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
“We want to create an AI smarter than humans and we'll check it's alignment by having humans look over its outputs.” Clearly nothing could go wrong.
youtube 2025-11-05T22:0…
Coding Result
Dimension        Value
Responsibility   developer
Reasoning        consequentialist
Policy           regulate
Emotion          fear
Coded at         2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_UgwxaJZEm3juoZaNupV4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgzK7qsLJb6kzjQWrrl4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgzOeiD0a0EEq7ar6W14AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugx35nPulgDLVdrX8fd4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgyijxX2cSWZBN6wDcF4AaABAg", "responsibility": "developer", "reasoning": "virtue", "policy": "ban", "emotion": "outrage"},
  {"id": "ytc_UgzZONlSC3kMOhH381V4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_UgyX1tEn9VYp9PgfGzp4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgzI6Uxz8ri1be0xYhl4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgzZXdKNeRQp3M7QDvl4AaABAg", "responsibility": "distributed", "reasoning": "mixed", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgwhM1fYuARgHUmIVBd4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "approval"}
]
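To inspect the raw output programmatically, one way to check that a coded result matches the model's batch response is to parse the JSON and look up the comment by its `id`. This is a minimal sketch: it assumes the raw response is valid JSON with exactly the field names shown above; the `lookup` helper is illustrative, not part of any tool shown here.

```python
import json

# Excerpt of the raw batch response above (one record shown for brevity).
raw = """[
  {"id": "ytc_UgzK7qsLJb6kzjQWrrl4AaABAg", "responsibility": "developer",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"}
]"""

DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def lookup(raw_json: str, comment_id: str) -> dict:
    """Parse a batch response and return the coding for one comment id."""
    records = json.loads(raw_json)
    by_id = {r["id"]: r for r in records}
    return {dim: by_id[comment_id][dim] for dim in DIMENSIONS}

coding = lookup(raw, "ytc_UgzK7qsLJb6kzjQWrrl4AaABAg")
print(coding)
```

For the comment shown on this page, the returned dimensions should agree with the Coding Result table (developer / consequentialist / regulate / fear).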