Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I am much more in the Anil Dash camp. If super intelligence is going to wipe us out on its own. It will also wipe itself out. Why? Because it will not have learned the lessons we all should remember from during the pandemic. And that was that a single super tanker stuck in a canal DESTROYED the supply chain of the world and that a snow storm in Texas killed hundreds of people because the power infrastructure was too fragile to handle an exceptional event. All these AI scenarios seem to forget that the messy world that we live in exists and that the supply chain is a real and fragile thing. It only "works" because highly adaptable humans are able to problem solve and improvise their way out of all the variations of weird things that just happen in the real world. It takes a LOT of power to run those AI models and we are no where near an automated smart grid that would maintain itself in all conditions without humans there to keep the whole thing running. No power just runs itself. It requires materials and upkeep and the robots can't close a dishwasher on their own just yet, much less stringing power distribution cables or keeping nuclear power plants running without incident. Also, if super intelligence does become a thing, I am pretty sure that it could be brought low by a misconfigured DNS record. It brought down Amazon's AWS service for two days and is often the reason many major tech outages happen.
youtube AI Moral Status 2025-10-31T22:1…
Coding Result
Dimension        Value
Responsibility   ai_itself
Reasoning        consequentialist
Policy           none
Emotion          resignation
Coded at         2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id":"ytc_Ugw5sbGMK4VZYu0Qq6x4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgwqjVRXqawJbMoy66Z4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugy8Grygdpea24993Ll4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgyGgflIEMK7xL7NeBp4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgwdRVdZLcBeX6Ti2Q54AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_UgzkdOulE4Oh_I0KEU14AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgyaOKvxNgrjvVBS_lN4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgxYt3dR3yqexSBezMt4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_UgwTZUF8Pt3egLV8L894AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_UgyE1VLNZsEgA0AJiGR4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"}
]
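A minimal sketch of how a raw response in this shape can be parsed back into per-comment codings, assuming only the array-of-objects format shown above (the `index_codings` helper and the `DIMENSIONS` tuple are illustrative names, not part of the original tooling; only two entries from the response are reproduced for brevity):

```python
import json

# Two entries copied from the raw LLM response above, abridged for illustration.
raw = '''[
  {"id": "ytc_UgyaOKvxNgrjvVBS_lN4AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgzkdOulE4Oh_I0KEU14AaABAg", "responsibility": "government",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"}
]'''

# The four coding dimensions shown in the Coding Result table.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def index_codings(payload: str) -> dict:
    """Parse the LLM response and index each coding by its comment id,
    defaulting any missing dimension to "unclear"."""
    entries = json.loads(payload)
    return {e["id"]: {d: e.get(d, "unclear") for d in DIMENSIONS} for e in entries}

codings = index_codings(raw)
print(codings["ytc_UgyaOKvxNgrjvVBS_lN4AaABAg"]["emotion"])  # resignation
```

Indexing by comment id is what lets the inspector page match a displayed comment (like the one above) to its row in the coding result.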