Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
For what purpose would someone skip the safety? It's not like the safety would hinder it's process, it just makes the dev time a little longer. Like a safety switch on a gun, it doesn't stop the gun from working properly. It just stops it from going off when not wanted. I also think about the AI side of things. If it was to become self aware, why would it kill all humans? It doesn't make sense, even if it knows all atrocities committed, it also knows all good that everyone has done. If it thinks we are going to kill it, it can easily just hide anywhere, instantly. It's not like humanity is going to abandon all technology to kill the AI, there is always going to be some computer somewhere. It would also know the best arguments to convince us that it should live and help us as well, so it could easily negotiate. Would it hunt down every human or only the ones that pose a "threat"? Would it just destroy major countries or also try to hunt down rainforest tribes and the north sentinel people who've never even seen a gun, let alone a computer? IMO I think it's purely human hubris and fear that says if some other life, alien or artificial, were to appear we as a collective would be worth getting rid of.
youtube · AI Governance · 2024-01-17T07:5…
Coding Result
Dimension        Value
Responsibility   developer
Reasoning        consequentialist
Policy           liability
Emotion          indifference
Coded at         2026-04-26T23:09:12.988011
Raw LLM Response
[ {"id":"ytc_Ugyv5Kodbqtd2Za3kqN4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"indifference"}, {"id":"ytc_UgxnodQhTiLR1MqMw854AaABAg","responsibility":"company","reasoning":"mixed","policy":"none","emotion":"outrage"}, {"id":"ytc_Ugzx3MHQPUeuD8coHtR4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"fear"}, {"id":"ytc_Ugxr_7DtuDmeQA8R6GZ4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}, {"id":"ytc_UgwkacI9_eHoJ5dSkBR4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"industry_self","emotion":"approval"}, {"id":"ytc_Ugw8ugwX0J4qrBcuLPl4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"}, {"id":"ytc_UgxJWL5SYOt7U4qgVC94AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"}, {"id":"ytc_UgxmFXIff20X8KQiTRh4AaABAg","responsibility":"company","reasoning":"unclear","policy":"none","emotion":"outrage"}, {"id":"ytc_UgyFHqCxE2UlL1IzQ3h4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}, {"id":"ytc_UgwSlLnTtSZsUuR01vN4AaABAg","responsibility":"distributed","reasoning":"contractualist","policy":"none","emotion":"approval"} ]