Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Musk and Altman stated in 2015 that they were partly motivated by concerns about AI safety and existential risk from artificial general intelligence. OpenAI stated that "it's hard to fathom how much human-level AI could benefit society", and that it is equally difficult to comprehend "how much it could damage society if built or used incorrectly". The startup also wrote that AI "should be an extension of individual human wills and, in the spirit of liberty, as broadly and evenly distributed as possible", and that "because of AI's surprising history, it's hard to predict when human-level AI might come within reach."
youtube 2025-11-23T01:5…
Coding Result
Dimension      | Value
---------------|---------------------------
Responsibility | developer
Reasoning      | consequentialist
Policy         | regulate
Emotion        | indifference
Coded at       | 2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id":"ytr_Ugx-j3UQyQL3NNUTF-l4AaABAg.APqtN_sWHsRAPquHgiBZ17","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"indifference"},
  {"id":"ytr_UgwFUk2xH-QGN15xhzV4AaABAg.APqt0wiq_amAPrtqiJf3KJ","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytr_UgwFUk2xH-QGN15xhzV4AaABAg.APqt0wiq_amAPsCMi61fNw","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
  {"id":"ytr_Ugz2eTHSidCkpRiSPPF4AaABAg.APqsGncXxggAPr2Nfg4sup","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
  {"id":"ytr_UgwzDTkQzQ7mVmFSR6R4AaABAg.APqrkuhXz4QAPrI3HfYt7f","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytr_UgyxjpSWBD6pdXN5EG94AaABAg.APqrgeFw6AeAPrNqUxP0bp","responsibility":"company","reasoning":"mixed","policy":"regulate","emotion":"fear"},
  {"id":"ytr_UgxcwOABtX2-4jUH_Bp4AaABAg.APqr_KtNd7vAPrzW9PwUU-","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"outrage"},
  {"id":"ytr_Ugw4EBqxWFMeDrPU8X54AaABAg.APqr76o7jidAPr1Hx_aGlW","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
  {"id":"ytr_UgxGWYH9DrHonS0Yawh4AaABAg.APqpgbzGKwiAPqptI1yRtH","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"ytr_UgwS298Y0sit5EwnWud4AaABAg.APqpd_ziwvyAPrHcWSqh8R","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"indifference"}
]
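A raw response in this shape can be parsed and sanity-checked before the codes are stored. The following is a minimal sketch, assuming the four dimensions and the value sets visible in the records above; the actual codebook may allow additional values, and the record id used in the usage line is hypothetical.

```python
import json

# Allowed codes per dimension, inferred from the records shown above
# (assumption: the real codebook may define more values).
ALLOWED = {
    "responsibility": {"developer", "company", "user", "government", "ai_itself", "none"},
    "reasoning": {"consequentialist", "deontological", "mixed", "unclear"},
    "policy": {"regulate", "liability", "none"},
    "emotion": {"indifference", "resignation", "approval", "outrage", "fear", "mixed"},
}

def validate_response(raw: str) -> list:
    """Parse a raw LLM response and reject records with unknown codes."""
    records = json.loads(raw)
    for rec in records:
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{rec.get('id')}: bad {dim}={rec.get(dim)!r}")
    return records

# Usage with a hypothetical single-record response:
raw = '[{"id":"ytr_example","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"indifference"}]'
print(len(validate_response(raw)))  # prints 1
```

Failing fast on an unknown code surfaces malformed model output at ingest time rather than letting it silently skew the coded dataset.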