Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
LOL I can dismantle Tyson's reason to not be worried about AGI in one sentence: "Please do my work in way that narrowly outperforms the results that Ted at work is getting. Lets do this all the time because he's an asshole." unbeknownst to me Ted said the same thing to his AI agent. Paperclips ensue.
youtube AI Moral Status 2025-07-31T00:2…
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  ai_itself
Reasoning       consequentialist
Policy          liability
Emotion         fear
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_UgxDwJxsviz873aqH-V4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgzMAIbiee_l3jFVEjZ4AaABAg", "responsibility": "none", "reasoning": "deontological", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgzU5jflk0VRHvPYeDt4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgwEZnAwT_ngVx1ahIB4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgwtFThDM9gSq1FbW8R4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgxSnfxVBB6Jj3nLBuB4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"},
  {"id": "ytc_UgzVJw3dmB5dftqfhj54AaABAg", "responsibility": "none", "reasoning": "deontological", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgxqLpIumeTYlfwoiFh4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgzIlHya3EIHHQRHJaV4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugw6YXAJHzE0jPyb3gB4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "none", "emotion": "mixed"}
]
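One way to verify that the coding result shown above matches the raw response is to parse the JSON array and index the records by comment id. A minimal Python sketch (the excerpt keeps only two of the ten records for brevity; the field names and ids are taken from the response above):

```python
import json

# Excerpt of the raw LLM response above: the record behind the coding
# result shown for this comment, plus one other record for contrast.
raw = '''[
  {"id": "ytc_UgxSnfxVBB6Jj3nLBuB4AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"},
  {"id": "ytc_UgzMAIbiee_l3jFVEjZ4AaABAg", "responsibility": "none",
   "reasoning": "deontological", "policy": "none", "emotion": "mixed"}
]'''

# Parse the array and build an id -> record lookup.
records = json.loads(raw)
by_id = {r["id"]: r for r in records}

# Cross-check the stored dimensions against the raw output.
coded = by_id["ytc_UgxSnfxVBB6Jj3nLBuB4AaABAg"]
print(coded["policy"], coded["emotion"])  # liability fear
```

The same lookup generalizes to the full ten-record array: any mismatch between a stored dimension value and the corresponding field in the raw response would surface as a failed comparison here.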