Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
The biggest threat is the greedy selfish relentless human side not the AI !! (and how old is this documentary 4:09 we get cheap gas from Russia :-))) At the heart of our engagement with AI lies an underexplored dilemma: Can we trust chatbots when our faith in their creators remains uncertain? Behind every artificial intellect stands a corporation and its human workforce, prompting us to question not just the reliability of these systems, but also the integrity of their creators. Personal biases may seep into AI, shaping its decisions and actions. The dual nature of humanity—capable of kindness yet driven by ego and greed—casts a shadow of unpredictability on AI, potentially leading to malfunctions, price surges, or erratic behaviors. This raises important questions about how we can trust technology that's inherently tied to human imperfections. The debate often centers on AI's potential to challenge societal norms, including its influence on personal identity and sexuality. However, a more pressing question looms: Are we, as a society, prepared to shoulder the responsibility for one another's well-being? A cursory glance at contemporary events reveals a shortfall in empathy, suggesting a collective unreadiness to navigate the ethical mazes posed by AI on human identity and values. In a world where the need for personalized AI stems not only from a desire for connection but also from the machinations of a profit-driven society, one must ponder: Can any entity born of such a society truly mend its fractures, or will it merely offer solace to a select few?
youtube AI Governance 2024-04-02T16:0…
Coding Result
Dimension       Value
Responsibility  company
Reasoning       deontological
Policy          liability
Emotion         fear
Coded at        2026-04-27T06:26:44.938723
Raw LLM Response
[ {"id":"ytc_UgyfKD5BOFRSb-im6Jl4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"outrage"}, {"id":"ytc_Ugxtu4ghllqoTUrQMed4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"}, {"id":"ytc_UgyR80pW6vL1KNEsPuJ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"}, {"id":"ytc_UgxeVxAhksE1gfqc3Ix4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"fear"}, {"id":"ytc_UgxXSPi27CBbqbWnNV94AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}, {"id":"ytc_UgwRH1t-Sc7G4Lk2Twx4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}, {"id":"ytc_Ugx414cmwz2LacPO3xh4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"}, {"id":"ytc_UgxQF9ibiK2lTeI1K914AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}, {"id":"ytc_UgxG9c3gXhcUjdeqpMB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"}, {"id":"ytc_UgzdKpjA3_yI1FA2uE94AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"} ]