Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
If we’re most likely in a “simulation,” haven’t we already lost the AI safety fight?
Source: youtube · AI Governance · 2025-10-15T18:3…
Coding Result
Dimension       Value
Responsibility  none
Reasoning       consequentialist
Policy          none
Emotion         resignation
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id":"ytc_UgyHY6eXInK6bVkQeEZ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugy1W_kYcKbVAngchbt4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"},
  {"id":"ytc_Ugzyyc8gLrVjEu3m-bh4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"},
  {"id":"ytc_Ugws5dV35DjlyOLvjo14AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgzGQ_dQOAhaGrlv2854AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgyWxomhLM3A_8IfWpt4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgxdLIy8x689YyXbkrh4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxvQg2g_BkR2eSpULl4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgxTC-izm8InkquETth4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgzDr2pmfYjkt6ZVedB4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]
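The raw response is a JSON array of per-comment codes, one object per comment ID. A minimal sketch of how such a batch response might be parsed and looked up by ID (record IDs and values copied from the batch above; the helper names are illustrative, not part of any tool shown here):

```python
import json

# Excerpt of the raw LLM response: a JSON array of per-comment codes.
raw = """[
  {"id":"ytc_UgzGQ_dQOAhaGrlv2854AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_Ugws5dV35DjlyOLvjo14AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"}
]"""

codes = json.loads(raw)

# Index the batch by comment ID so each coded comment can be inspected directly.
by_id = {row["id"]: row for row in codes}

record = by_id["ytc_UgzGQ_dQOAhaGrlv2854AaABAg"]
print(record["emotion"])  # → resignation
```

Indexing by `id` makes it easy to join a model's codes back onto the original comments, which is how a page like this one pairs a single comment with its row from the batch.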