Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
There is a chance that Sam Altman of OpenAI as well as whoever is behind Anthropic realize this as well, and have good ulterior intentions and are trying to be the leaders in AI research to realize it. Unfortunately especially with OpenAI this is likely not the case, but I hold out hope that should they win the AI race it would at least be result in a better world than someone like Google/Meta or a Chinese firm winning. Ultimately, for Google/Meta/Microsoft we can make a very confident guess that their AI efforts are just for profit, for OpenAI it's less certain - but even 90% profit and 10% for the good of humanity would result in a huge improvement in practice due to how marginal economic curves work (to extract that last 10% of extra profit you either miss out on disproportionate good or do disproportionate harm, as you have to do net negative things like price discrimination and anti-competitive practices to milk every drop of money out of your customers).
youtube · AI Governance · 2024-03-17T00:0… · ♥ 9
Coding Result
Dimension        Value
Responsibility   company
Reasoning        virtue
Policy           none
Emotion          mixed
Coded at         2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id": "ytr_Ugxhm-p7rDbeJGonM3h4AaABAg.A13G5Lpm0ADA13gfmWd8My", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytr_Ugxhm-p7rDbeJGonM3h4AaABAg.A13G5Lpm0ADA14vzCfaopw", "responsibility": "government", "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytr_Ugxhm-p7rDbeJGonM3h4AaABAg.A13G5Lpm0ADA14z_wRoAj9", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "none", "emotion": "mixed"},
  {"id": "ytr_Ugxi5ce8EMCD9FE3_yR4AaABAg.A13Ez6-PLyqA13WOLhlLWe", "responsibility": "none", "reasoning": "contractualist", "policy": "industry_self", "emotion": "approval"},
  {"id": "ytr_Ugxi5ce8EMCD9FE3_yR4AaABAg.A13Ez6-PLyqA17KyQnbW2P", "responsibility": "none", "reasoning": "virtue", "policy": "none", "emotion": "mixed"},
  {"id": "ytr_Ugwf45m2Q90YUSuxErJ4AaABAg.A13EcDzGajyA16vt7XpPEt", "responsibility": "company", "reasoning": "consequentialist", "policy": "liability", "emotion": "outrage"},
  {"id": "ytr_UgyHe-33IprUCbxgqwN4AaABAg.A13E1yxI70dA1Gd_bBpr4m", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytr_UgzoEE0Nmcw2-gyYSzV4AaABAg.A136JkmRALMA16Q1pb8MRZ", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytr_Ugw1xsTprIl6aeaT5ft4AaABAg.A135mYaeu95A13ZLuH9Mob", "responsibility": "company", "reasoning": "virtue", "policy": "none", "emotion": "mixed"},
  {"id": "ytr_Ugw1xsTprIl6aeaT5ft4AaABAg.A135mYaeu95A13bAY3-G1c", "responsibility": "company", "reasoning": "deontological", "policy": "liability", "emotion": "outrage"}
]
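A raw response like the one above can be parsed and checked before the codes are stored. The sketch below is a minimal, hypothetical validator (the function name `parse_coding` and the `SCHEMA` table are assumptions, not part of any tool shown here); the allowed values per dimension are inferred from the coded records on this page.

```python
import json

# Allowed values per coding dimension, inferred from the records above.
# This is an assumed schema, not the tool's actual configuration.
SCHEMA = {
    "responsibility": {"company", "government", "ai_itself", "distributed", "none"},
    "reasoning": {"consequentialist", "deontological", "virtue", "contractualist"},
    "policy": {"none", "regulate", "liability", "industry_self"},
    "emotion": {"outrage", "fear", "approval", "indifference", "mixed"},
}

def parse_coding(raw: str) -> list:
    """Parse a raw LLM response and reject any record with an off-schema value."""
    records = json.loads(raw)
    for rec in records:
        for dim, allowed in SCHEMA.items():
            if rec.get(dim) not in allowed:
                raise ValueError(
                    f"{rec.get('id')}: invalid {dim!r} value {rec.get(dim)!r}"
                )
    return records

# Hypothetical single-record example in the same shape as the raw response.
raw = ('[{"id":"ytr_example","responsibility":"company",'
       '"reasoning":"virtue","policy":"none","emotion":"mixed"}]')
print(parse_coding(raw)[0]["responsibility"])  # prints: company
```

A record with a value outside the schema (say, `"emotion": "joy"`) would raise `ValueError` instead of silently entering the coded dataset.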