Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
The way they're implementing the AI clearly indicates we're in a similar point of history, as when first nuclear weapons were invented. Otherwise they would create useful machines first, instead of going for AGI which eventually might kill us all. If this is how this works, then the AGI must be such extreme security risk, that even if it will be a problem on its own, not having it will certainly screw everyone failing to create it first.
youtube 2026-04-10T05:5…
Coding Result
Dimension       Value
Responsibility  developer
Reasoning       consequentialist
Policy          regulate
Emotion         fear
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_UgwORS6bixDwiHYVHTh4AaABAg", "responsibility": "ai_itself", "reasoning": "mixed", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgziuAj3YB0sX72yLNp4AaABAg", "responsibility": "user", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_Ugzu6fiRByg4qq8OcxB4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgyrBG27yQhy8kc1VvN4AaABAg", "responsibility": "user", "reasoning": "consequentialist", "policy": "ban", "emotion": "outrage"},
  {"id": "ytc_Ugxnk77FS-dRXH6D5Jt4AaABAg", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_Ugw2weUm7X3Eda8lzad4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgwwUchVUfyKBH9hzxJ4AaABAg", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "approval"},
  {"id": "ytc_UgwVoHNvAvQM5x5hR3h4AaABAg", "responsibility": "developer", "reasoning": "mixed", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_Ugywdyc5RyFLcnW0hgR4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_Ugw_o_qu9-sPiJ5eyFd4AaABAg", "responsibility": "government", "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"}
]
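A minimal sketch, in Python, of how a raw batch response like the one above can be parsed and the coding for a single comment id looked up. This assumes the model output is valid JSON in exactly the shape shown (a list of objects keyed by `id`); the two entries below are copied from the raw response, and the lookup id is the one whose coding matches the Coding Result table.

```python
import json

# Raw LLM response (abbreviated to two entries from the batch above).
raw = '''[
  {"id": "ytc_Ugw2weUm7X3Eda8lzad4AaABAg", "responsibility": "developer",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_Ugywdyc5RyFLcnW0hgR4AaABAg", "responsibility": "company",
   "reasoning": "consequentialist", "policy": "liability", "emotion": "outrage"}
]'''

# Index the batch by comment id so individual codings can be looked up.
codings = {entry["id"]: entry for entry in json.loads(raw)}

# Pull out the coding for the comment shown in this view.
coding = codings["ytc_Ugw2weUm7X3Eda8lzad4AaABAg"]
print(coding["responsibility"], coding["emotion"])  # developer fear
```

Indexing by `id` rather than list position makes the lookup robust if the model returns the entries in a different order than the comments were submitted.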