Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I asked ChatGPT how AI would (ever) take over (per Elon's claim), when the fact is, we could simply pull the plug on it at any given moment. This was the response: "The scenario of AI taking control is often depicted in science fiction as a result of AI becoming superintelligent and deciding to act against humanity's interests. However, in reality, AI systems are designed and programmed by humans, and they can only operate within the parameters set by their human creators. If human beings have the ability to "pull the plug" and shut down an AI system, then they retain ultimate control. AI systems are dependent on power and connectivity to function, so if these resources are cut off, they will stop operating. In addition, most AI systems have failsafe mechanisms built in to prevent them from taking actions that are outside of their intended use case. That being said, it's important to note that AI systems are becoming increasingly integrated into our daily lives, and as they become more advanced, their potential impact on society becomes greater. Therefore, it's crucial that we continue to develop and implement ethical and responsible AI practices to ensure that these systems are used for the benefit of humanity, rather than causing harm."
youtube AI Governance 2023-04-21T02:3… ♥ 1
Coding Result
Dimension       Value
Responsibility  none
Reasoning       consequentialist
Policy          none
Emotion         approval
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_UgyhHaYYQKbdcXRjjSd4AaABAg", "responsibility": "government", "reasoning": "deontological", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_Ugz3S_D0PjBXG_jIupx4AaABAg", "responsibility": "government", "reasoning": "deontological", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgzTM8inCYHBmTIEFMF4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgwK1t9X5dMdZsqWXPB4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "approval"},
  {"id": "ytc_UgyiWrJAjBcLltVPvGd4AaABAg", "responsibility": "distributed", "reasoning": "contractualist", "policy": "regulate", "emotion": "mixed"},
  {"id": "ytc_UgxixiHztrqsiPKLvqN4AaABAg", "responsibility": "user", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgwXr-JxpyKip_RFfdh4AaABAg", "responsibility": "government", "reasoning": "deontological", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_Ugw2o41o3Dqrd69HAHh4AaABAg", "responsibility": "developer", "reasoning": "unclear", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_Ugxf5O9YdP_RmKNmoIt4AaABAg", "responsibility": "developer", "reasoning": "virtue", "policy": "unclear", "emotion": "outrage"},
  {"id": "ytc_Ugy78PPgLZ0Yy-cMbO54AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"}
]
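A downstream step presumably parses this raw batch response back into per-comment codes. A minimal Python sketch, assuming the four dimension names and `id` field shown in the JSON above; the sets of allowed values are inferred from the codes visible in this output and are an assumption, not the full codebook:

```python
import json

# Allowed values per dimension, inferred from the codes observed in the
# raw response above (assumption: the real codebook may contain more).
ALLOWED = {
    "responsibility": {"none", "government", "developer", "user", "ai_itself", "distributed"},
    "reasoning": {"unclear", "consequentialist", "deontological", "contractualist", "virtue"},
    "policy": {"none", "unclear", "regulate", "liability"},
    "emotion": {"approval", "outrage", "fear", "indifference", "mixed"},
}

def parse_batch(raw: str) -> dict:
    """Parse a raw LLM batch response into {comment_id: codes}.

    Raises ValueError if the model emitted a value outside the
    inferred category sets, so bad codes fail loudly instead of
    silently entering the dataset.
    """
    records = json.loads(raw)
    coded = {}
    for rec in records:
        codes = {dim: rec[dim] for dim in ALLOWED}
        for dim, value in codes.items():
            if value not in ALLOWED[dim]:
                raise ValueError(f"{rec['id']}: unexpected {dim}={value!r}")
        coded[rec["id"]] = codes
    return coded

# One record from the batch above, as returned by the model.
raw = ('[{"id":"ytc_UgwK1t9X5dMdZsqWXPB4AaABAg",'
       '"responsibility":"ai_itself","reasoning":"consequentialist",'
       '"policy":"unclear","emotion":"approval"}]')
codes = parse_batch(raw)
```

Keying the result by comment `id` lets the coding result shown for a single comment (e.g. the Dimension/Value table above) be looked up directly from the batch output.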