Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
The question that needs to be asked is WHY? Has anyone done a true cost-benefit analysis? Has anyone done a complete safety analysis? For climbing a ladder at work there are pages of safety rules. For building a technology that many experts say has a 10% to 50% chance of wiping out civilization? Crickets. What about the the medium case scenario where it does not destroy civilization but creates a world similar to the Terminator series, with AI enabled warlords keeping the world's serfs in line with AI enabled surveillance, robots, and killer drones? Or maybe there are too many "serfs" who will need to be culled with the help of AI. The chance of unrestricted AI development having a happy landing is probably the same as winning the jackpot at your local casino: possible, but not likely. 😐
youtube 2026-04-19T23:4…
Coding Result
Dimension       Value
Responsibility  developer
Reasoning       consequentialist
Policy          liability
Emotion         fear
Coded at        2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id": "ytc_Ugxi1X0KTtZAbBNajRl4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "ban", "emotion": "outrage"},
  {"id": "ytc_UgwjAwNLH4KmDbSuBkN4AaABAg", "responsibility": "government", "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_UgzhCcudp5Fp-cpwcmB4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"},
  {"id": "ytc_UgxXOhs5gPsxLHaZzfJ4AaABAg", "responsibility": "ai_itself", "reasoning": "mixed", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_Ugyq2trOk56NrF1MSFp4AaABAg", "responsibility": "none", "reasoning": "virtue", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgxCNwLgFDWzxkf6kTJ4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "industry_self", "emotion": "indifference"},
  {"id": "ytc_Ugx0MFhjAhNnMS9PAfd4AaABAg", "responsibility": "distributed", "reasoning": "contractualist", "policy": "regulate", "emotion": "approval"},
  {"id": "ytc_Ugxs4W3jIq1OpsZ7BNZ4AaABAg", "responsibility": "user", "reasoning": "mixed", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgxI-6VRFmL5FDdB9dJ4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "liability", "emotion": "indifference"},
  {"id": "ytc_Ugwcg2o27KnL-9XJDz14AaABAg", "responsibility": "government", "reasoning": "deontological", "policy": "regulate", "emotion": "fear"}
]
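A raw response like the one above can be parsed and sanity-checked with the standard library before its values are trusted. The sketch below is a minimal illustration, not part of the tool itself: the function name `parse_coding_response` and the sample input are hypothetical, while the five expected fields (`id`, `responsibility`, `reasoning`, `policy`, `emotion`) come directly from the payload shown.

```python
import json

# The five fields every coded record carries in the raw response above.
EXPECTED_KEYS = {"id", "responsibility", "reasoning", "policy", "emotion"}


def parse_coding_response(raw: str) -> list[dict]:
    """Parse a raw LLM coding response and verify each record's shape.

    Hypothetical helper: raises ValueError if any record is missing
    one of the expected coding dimensions.
    """
    records = json.loads(raw)
    for rec in records:
        missing = EXPECTED_KEYS - rec.keys()
        if missing:
            raise ValueError(
                f"record {rec.get('id')} missing fields: {sorted(missing)}"
            )
    return records


# Hypothetical single-record input in the same shape as the payload above.
raw = (
    '[{"id": "ytc_example", "responsibility": "developer", '
    '"reasoning": "consequentialist", "policy": "liability", '
    '"emotion": "fear"}]'
)
records = parse_coding_response(raw)
print(records[0]["policy"])  # → liability
```

A check like this catches the common failure mode of model output that is valid JSON but drops a dimension, before the record reaches the coding table.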