Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
The possibility of AI ruling the world is highly speculative and subject to debate among experts. While AI has advanced significantly in recent years, achieving true artificial general intelligence (AGI) with human-like capabilities remains an open research challenge. It is important to note that AI systems are designed to solve specific tasks and are trained to perform within specific boundaries. While they can outperform humans in some specific areas, they lack broader human-like understanding and common-sense reasoning. However, if AGI is developed in the future, there are differing views on how it could evolve. Some experts argue that proper regulations and ethical frameworks could be put in place to ensure AI systems always serve human interests. Others caution about the risks associated with AGI if not developed and handled responsibly, such as unintended consequences or negative impacts. Ultimately, the magnitude of the possibility depends on various factors, including the progress of AI research, societal decisions, and our ability to shape the development of AI in a way that aligns with human values and goals.
Source: youtube · AI Governance · 2023-07-11T08:1…
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  none
Reasoning       consequentialist
Policy          none
Emotion         resignation
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_UgzdodlsQekam-a1d5x4AaABAg", "responsibility": "none",      "reasoning": "consequentialist", "policy": "ban",           "emotion": "fear"},
  {"id": "ytc_Ugw1WdnFxAA7gJ6cd5J4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none",          "emotion": "fear"},
  {"id": "ytc_UgxQOtNZw2wSzzncIZF4AaABAg", "responsibility": "developer", "reasoning": "mixed",            "policy": "regulate",      "emotion": "mixed"},
  {"id": "ytc_UgwkNttAWvSfItlcxud4AaABAg", "responsibility": "none",      "reasoning": "consequentialist", "policy": "liability",     "emotion": "fear"},
  {"id": "ytc_Ugxtx4frT75eqdwFWBV4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none",          "emotion": "fear"},
  {"id": "ytc_UgzOVqN0B3lyOTQ6RMh4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none",          "emotion": "fear"},
  {"id": "ytc_UgxNI1fg6c6S_8IkBF14AaABAg", "responsibility": "company",   "reasoning": "consequentialist", "policy": "industry_self", "emotion": "indifference"},
  {"id": "ytc_UgwbFZnVEpm1NZupyhZ4AaABAg", "responsibility": "none",      "reasoning": "consequentialist", "policy": "none",          "emotion": "resignation"},
  {"id": "ytc_UgxTPXjuzMSyEZZZa8V4AaABAg", "responsibility": "none",      "reasoning": "unclear",          "policy": "none",          "emotion": "indifference"},
  {"id": "ytc_Ugw9mqw9t60dzCQ5Rel4AaABAg", "responsibility": "company",   "reasoning": "deontological",    "policy": "none",          "emotion": "indifference"}
]
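A raw response like the one above can be parsed and validated before the per-comment codes are stored. The sketch below is a minimal, hypothetical Python validator, assuming the category sets visible in this batch (the real codebook may define additional values for each dimension):

```python
import json

# Allowed values per coding dimension, inferred from the responses shown
# above; the actual codebook may include more categories.
ALLOWED = {
    "responsibility": {"none", "ai_itself", "developer", "company"},
    "reasoning": {"consequentialist", "deontological", "mixed", "unclear"},
    "policy": {"none", "ban", "regulate", "liability", "industry_self"},
    "emotion": {"fear", "mixed", "indifference", "resignation"},
}


def parse_codings(raw: str) -> list[dict]:
    """Parse a raw LLM response into validated coding records.

    Raises ValueError if the JSON is malformed, or if a record uses a
    value outside the allowed set for any dimension.
    """
    records = json.loads(raw)  # may raise json.JSONDecodeError
    for rec in records:
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                raise ValueError(
                    f"{rec.get('id')}: unexpected {dim}={rec.get(dim)!r}"
                )
    return records


# Hypothetical usage with a single-record response:
raw = ('[{"id": "ytc_example", "responsibility": "none", '
       '"reasoning": "unclear", "policy": "none", "emotion": "fear"}]')
print(parse_codings(raw)[0]["emotion"])  # prints "fear"
```

Rejecting off-codebook values at parse time keeps malformed or hallucinated categories out of the downstream results table.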