Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
When I asked Chat GPT if it could ever eliminate mankind, this is what it said: "AI is designed to follow specific rules and algorithms, and it can only make decisions based on the data and instructions it has been given. AI lacks emotions and desires, and it does not have the capacity to act independently of its programming. Therefore, an AI would not have any inherent motivation to harm or eliminate humans. However, it is possible that an AI could be programmed with harmful intentions, or it could be manipulated by humans with malicious intent. Additionally, there is the risk that an AI system could malfunction or operate unpredictably, leading to unintended consequences. To mitigate these risks, it is crucial that AI development is guided by ethical principles and regulations, and that AI systems are regularly monitored and audited for safety and security."
youtube AI Governance 2023-04-18T04:1…
Coding Result
Dimension       Value
Responsibility  developer
Reasoning       deontological
Policy          unclear
Emotion         approval
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id":"ytc_UgxRdzZ0LAJX_UoYm3x4AaABAg","responsibility":"government","reasoning":"deontological","policy":"none","emotion":"fear"},
  {"id":"ytc_UgzoapS7QooSzfhhdsJ4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_UgxkMzgEtQAIQeINzZR4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ugw1luipc7srNbxEkX14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgwrmiDXDneUUyHq2MV4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_UgzmkPkP3va4KExrVCV4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgzqXwGYOD7pWU1GX0Z4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgzGfN1K5osyRKqiH5F4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"approval"},
  {"id":"ytc_Ugz1CoUb7hNYHaJ_3s54AaABAg","responsibility":"distributed","reasoning":"virtue","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgyOCCRs8b09y4knq7d4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"liability","emotion":"fear"}
]
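Each record in the raw response carries the same four coding dimensions shown in the result table above. A minimal sketch of how such a response could be parsed and validated, assuming the label sets visible in this sample (the real codebook may define more labels; `parse_codings` and `ALLOWED` are illustrative names, not part of the pipeline):

```python
import json
from collections import Counter

# Label sets inferred from the sample response above (an assumption,
# not the authoritative codebook).
ALLOWED = {
    "responsibility": {"government", "developer", "user", "ai_itself",
                       "distributed", "unclear"},
    "reasoning": {"deontological", "consequentialist", "virtue",
                  "mixed", "unclear"},
    "policy": {"none", "regulate", "liability", "unclear"},
    "emotion": {"fear", "outrage", "approval", "mixed"},
}

def parse_codings(raw: str) -> list[dict]:
    """Parse a raw LLM response and keep only records whose labels
    all fall within the allowed sets."""
    records = json.loads(raw)
    return [rec for rec in records
            if all(rec.get(dim) in vals for dim, vals in ALLOWED.items())]

# Hypothetical single-record response for illustration.
raw = ('[{"id":"ytc_x","responsibility":"developer",'
       '"reasoning":"deontological","policy":"unclear",'
       '"emotion":"approval"}]')
codings = parse_codings(raw)
print(Counter(r["emotion"] for r in codings))  # Counter({'approval': 1})
```

Filtering rather than raising on unknown labels keeps one malformed record from discarding a whole batch; a stricter pipeline might log the rejects for re-coding.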