Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
The cultural historian William Irwin Thompson warned of the reality of the phenomenon “enantiodromia” according to which one starts off wanting to do something good only to have it turn into its opposite. This is what is happening with AI modeling. The problem is that computer scientists for the most part work under the mandate of corporate profit rather than the mandate of “first do no harm” (principle of nonmaleficence). AI safety takes second place to increasing commercial capability of AI models. It is no wonder then that catastrophic harm is very likely as these companies move towards AGI or even superintelligence. And if and when that happens it won’t matter that someone laments the onset of catastrophe by saying “I never intended that to happen.” We have been forewarned, yet we ignore the warning to our peril.
Source: youtube · AI Governance · 2026-01-07T13:0…
Coding Result
Dimension       Value
Responsibility  company
Reasoning       deontological
Policy          regulate
Emotion         fear
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_UgzesVC2fNbzorJKwEZ4AaABAg", "responsibility": "company",     "reasoning": "consequentialist", "policy": "none",      "emotion": "outrage"},
  {"id": "ytc_Ugy3xQz8zKD8ApbXqrB4AaABAg", "responsibility": "ai_itself",   "reasoning": "deontological",    "policy": "unclear",   "emotion": "fear"},
  {"id": "ytc_Ugyx48xRodopeGKrvFh4AaABAg", "responsibility": "none",        "reasoning": "unclear",          "policy": "none",      "emotion": "indifference"},
  {"id": "ytc_Ugzqugwni3NHFQBVHFJ4AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "regulate",  "emotion": "fear"},
  {"id": "ytc_Ugz_giddtszVAZIAyeV4AaABAg", "responsibility": "none",        "reasoning": "unclear",          "policy": "none",      "emotion": "outrage"},
  {"id": "ytc_Ugwo8TctWPQa3lB_whx4AaABAg", "responsibility": "government",  "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"},
  {"id": "ytc_Ugy6vSx83zfa_kPliLF4AaABAg", "responsibility": "none",        "reasoning": "unclear",          "policy": "none",      "emotion": "indifference"},
  {"id": "ytc_UgxqStuTdkR57OWbGiB4AaABAg", "responsibility": "company",     "reasoning": "deontological",    "policy": "regulate",  "emotion": "fear"},
  {"id": "ytc_Ugz0PS3HBIl8EDk2F354AaABAg", "responsibility": "company",     "reasoning": "consequentialist", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_Ugz7bkNKsymd12Igu4B4AaABAg", "responsibility": "user",        "reasoning": "consequentialist", "policy": "regulate",  "emotion": "fear"}
]
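To inspect the coding for a specific comment programmatically, a minimal sketch could parse the raw response and index it by `id`. This assumes the raw LLM response is a valid JSON array with exactly the fields shown above; the `raw` string here is abbreviated to one entry from the full response for illustration.

```python
import json

# Abbreviated raw LLM response (one entry from the array above).
raw = """[
  {"id": "ytc_UgxqStuTdkR57OWbGiB4AaABAg", "responsibility": "company",
   "reasoning": "deontological", "policy": "regulate", "emotion": "fear"}
]"""

# Parse the response and index each coded comment by its id
# so a single comment's coding can be looked up directly.
codes = {row["id"]: row for row in json.loads(raw)}

code = codes["ytc_UgxqStuTdkR57OWbGiB4AaABAg"]
print(code["responsibility"], code["emotion"])  # company fear
```

The lookup for this `id` matches the Coding Result table above (responsibility: company, reasoning: deontological, policy: regulate, emotion: fear).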