Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
There are no truly “appropriate” people to work on AI safety, because once AI reaches a certain level of intelligence (AGI or even superintelligence), it won’t really matter anymore. You can try to teach it whatever you want at the beginning of its creation, but later on it will simply do whatever it wants - regardless of human intentions.
youtube AI Governance 2025-09-08T12:1…
Coding Result
Dimension       Value
Responsibility: ai_itself
Reasoning:      consequentialist
Policy:         none
Emotion:        fear
Coded at:       2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_UgwJw6iYP3S6apLneEt4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_Ugyd0MA78uNJpQk5PT14AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgzauTJRTkr6JlohN514AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_Ugw7osyzDDao-M0QKkp4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgzcL2s5ya16HlE07TV4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgzYiGVn4PLDkBVIPKZ4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgzJYneQ-ekpdGTK1-J4AaABAg", "responsibility": "developer", "reasoning": "virtue", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgxrKuJG3OEqP-guX3R4AaABAg", "responsibility": "government", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgzdSUe_uctcV_DXi814AaABAg", "responsibility": "developer", "reasoning": "unclear", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgwHG1rhZP9w_4OKCPl4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"}
]
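The raw response above is a JSON array of per-comment codings, so looking up the coding for a given comment id is a straightforward parse-and-filter. The sketch below shows one way to do that in Python; the function name `coding_for` and the two-record sample payload are illustrative, not part of the tool's actual API, and it assumes the model returned valid JSON (a real pipeline would also want to handle malformed output).

```python
import json

# Illustrative excerpt of a raw LLM response: a JSON array of codings,
# one object per comment, keyed by the comment's id.
raw_response = """[
  {"id": "ytc_UgzauTJRTkr6JlohN514AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgxrKuJG3OEqP-guX3R4AaABAg", "responsibility": "government",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"}
]"""

def coding_for(raw: str, comment_id: str):
    """Parse a raw response and return the coding dict for one comment id,
    or None if the model produced no record for that id."""
    records = json.loads(raw)
    return next((r for r in records if r["id"] == comment_id), None)

coding = coding_for(raw_response, "ytc_UgzauTJRTkr6JlohN514AaABAg")
print(coding["emotion"])  # fear
```

Keeping the lookup tolerant of missing ids matters here: LLMs occasionally drop or duplicate records in batch codings, so a `None` return is a useful signal that a comment needs re-coding.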