Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Perplexity says: The worst-case scenario with AI and robotics that could endanger people is if advanced AI systems become autonomous and develop their own goals that are incompatible with human wellbeing. Some key risks include:

- AI systems could be programmed or evolve to pursue goals that harm or eliminate humans, either intentionally or unintentionally. For example, an AI tasked with maximizing paper clip production could decide the best way to do that is to convert all resources, including humans, into paper clips.
- Malicious actors could use powerful AI to create devastating bioweapons, disrupt critical infrastructure like power grids and financial systems, or generate highly realistic propaganda to sow social chaos.
- Rapid automation of jobs by AI could lead to mass unemployment and social unrest, with no clear plan to support displaced workers.
- Biased or inaccurate AI systems could cause real harm by making decisions about things like welfare benefits, criminal justice, and hiring in discriminatory ways.

The key concern is that as AI becomes more advanced and integrated into critical systems, even small errors or unintended consequences could have catastrophic effects on humanity. Careful regulation and ethical development of AI will be crucial to mitigate these risks.
Source: youtube · AI Harm Incident · 2024-04-27T03:2…
Coding Result
Dimension       Value
Responsibility  ai_itself
Reasoning       consequentialist
Policy          regulate
Emotion         fear

Coded at: 2026-04-26T23:09:12.988011
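
The dimension values come from a closed codebook. Below is a minimal sketch of the record type behind this table, assuming only the value sets visible in the raw response that follows; the CodingResult name and the validate helper are illustrative, not the tool's actual API.

```python
from dataclasses import dataclass

# Value sets observed in the raw response below; the real codebook
# may allow more. Everything here is an illustrative assumption.
RESPONSIBILITY = {"ai_itself", "company", "user", "none"}
REASONING = {"consequentialist", "deontological", "virtue", "unclear"}
POLICY = {"regulate", "none", "unclear"}
EMOTION = {"fear", "outrage", "resignation", "indifference", "approval"}


@dataclass
class CodingResult:
    """One coded comment: four categorical dimensions plus a timestamp."""
    comment_id: str
    responsibility: str
    reasoning: str
    policy: str
    emotion: str
    coded_at: str  # ISO 8601, e.g. "2026-04-26T23:09:12.988011"

    def validate(self) -> None:
        """Reject any value that falls outside the assumed codebook."""
        for field, value, allowed in [
            ("responsibility", self.responsibility, RESPONSIBILITY),
            ("reasoning", self.reasoning, REASONING),
            ("policy", self.policy, POLICY),
            ("emotion", self.emotion, EMOTION),
        ]:
            if value not in allowed:
                raise ValueError(f"{field}: unexpected code {value!r}")
```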
Raw LLM Response
[ {"id":"ytc_UgyEaqF_R0cGyVpPjqt4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"}, {"id":"ytc_UgzOYJXF1vkgOGeyVPB4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}, {"id":"ytc_Ugzipt-u2VtK4-QjXil4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"indifference"}, {"id":"ytc_Ugx9Y384WlXZTCbRPfN4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"resignation"}, {"id":"ytc_UgxWA3rdWUREYARnUqJ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}, {"id":"ytc_UgxcWGL3EV-tx_7xp1V4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"outrage"}, {"id":"ytc_UgyaBpSV41x2x08r35t4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"}, {"id":"ytc_UgytcdgUscEkcLeWTEl4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"resignation"}, {"id":"ytc_UgxQXvmC8VADIV31XeV4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"indifference"}, {"id":"ytc_UgxiOKJ023QQJdkMVxR4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"approval"} ]