Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I asked ChatgGPT if I should be worried, here is the answer: No, you don't need to be worried about AI taking over the world and eradicating humanity. While AI has advanced significantly in recent years, it is still far from achieving the level of consciousness or intent necessary to have such destructive goals. AI systems are created and controlled by humans, and their actions are ultimately governed by the goals and intentions programmed into them. AI is designed to assist and augment human capabilities, solve complex problems, and improve efficiency in various fields. As long as AI development and deployment are guided by ethical considerations and appropriate safeguards, AI can bring about numerous benefits to society. However, it's important to acknowledge that AI technology, like any powerful tool, can have both positive and negative consequences. It's crucial to prioritize responsible development, address potential risks, and ensure the appropriate use of AI for the benefit of humanity. Ongoing research and discussions are being conducted to establish ethical guidelines, policies, and regulations surrounding AI to minimize any potential negative impacts. So, while it's natural to be curious and vigilant about technological advancements, there is currently no reason to fear an AI takeover. AI development is driven by human decisions and values, and our collective responsibility is to ensure it is harnessed for the greater good
Source: youtube · AI Governance · 2023-07-16T16:5…
Coding Result
Dimension       Value
Responsibility  developer
Reasoning       consequentialist
Policy          none
Emotion         approval
Coded at        2026-04-26T23:09:12.988011
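
Each coding result is a record of four categorical dimensions plus a timestamp. Below is a minimal sketch of how such a record might be modeled, assuming Python; the class and field names are illustrative assumptions, not taken from the tool itself:

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass(frozen=True)
class CodingResult:
    """One LLM coding decision for a single comment (illustrative model)."""
    comment_id: str      # YouTube comment id, e.g. "ytc_UgxEb4r9w0pWhjHTJ8p4AaABAg"
    responsibility: str  # who is held responsible, e.g. "developer"
    reasoning: str       # moral-reasoning style, e.g. "consequentialist"
    policy: str          # policy stance, e.g. "none"
    emotion: str         # dominant emotion, e.g. "approval"
    coded_at: datetime   # when the code was assigned

# The table above, expressed as one such record:
result = CodingResult(
    comment_id="ytc_UgxEb4r9w0pWhjHTJ8p4AaABAg",
    responsibility="developer",
    reasoning="consequentialist",
    policy="none",
    emotion="approval",
    coded_at=datetime.fromisoformat("2026-04-26T23:09:12.988011"),
)
```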
Raw LLM Response
[ {"id":"ytc_UgxEb4r9w0pWhjHTJ8p4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"approval"}, {"id":"ytc_UgzLNtXBLg8PvxVLOp14AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}, {"id":"ytc_UgyFKpHWqHdV2WuOKVR4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"mixed"}, {"id":"ytc_Ugw5pEGhUnCc_8OXQIJ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}, {"id":"ytc_UgwaODy-Emq7a0aIWi54AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"}, {"id":"ytc_UgwjcVg_fG4lwpG-8kt4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"none","emotion":"mixed"}, {"id":"ytc_UgxZoIiaWJ-yzSQ-VXd4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"outrage"}, {"id":"ytc_UgxKeivjlTNTCfSaiy54AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"liability","emotion":"outrage"}, {"id":"ytc_Ugyivm7SCZzeDQeB1V54AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"approval"}, {"id":"ytc_UgzNR5t9qQyLZDBH3Ux4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"} ]