Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Panelists and positions, in brief:
- Latanya Sweeney (Harvard University, Professor of Government and Technology): Argues that AI must be designed and regulated to safeguard privacy and ensure fair treatment in real-world institutions.
- Kate Crawford (University of Southern California, Distinguished Professor of AI): Argues that large-scale AI entrenches social and political inequalities and urges limits on data-extractive, classification-heavy uses, especially those applied to people.
- Chris Callison-Burch (University of Pennsylvania, Professor of Computer and Information Science): Highlights LLMs' transformative power in linguistics while pressing for careful, empirically grounded evaluation and policy for generative AI.
- Cynthia Rush (Columbia University, Associate Professor of Statistics): Brings a cautious, methodical focus on rigorously understanding and validating complex AI and data-driven systems through high-dimensional statistics and learning theory.
- Nate Soares (President, Machine Intelligence Research Institute, founded in 2000 by Eliezer Yudkowsky): Warns that self-improving advanced AI could be catastrophically dangerous and calls for urgent, coordinated global efforts to keep AI aligned with human survival.
- Eric Schmidt (former CEO and Chairman of Google): Predicts that rapidly advancing, self-improving AI will transform every sector and insists on strong guardrails and democratic governance to preserve human agency.
Source: YouTube, AI Governance, 2026-03-23T14:4…
Coding Result
Dimension      | Value
-------------- | --------------------------
Responsibility | government
Reasoning      | deontological
Policy         | regulate
Emotion        | mixed
Coded at       | 2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id": "ytc_UgyFaPo5YEWO9sgresx4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "ban", "emotion": "outrage"},
  {"id": "ytc_UgzRLgvTqt9ey242tCR4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_Ugy8d0Js_dXIzGicbSZ4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "regulate", "emotion": "mixed"},
  {"id": "ytc_UgzRRVCgkkUw2nFLUgF4AaABAg", "responsibility": "none", "reasoning": "deontological", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgzA7PO0_1cxTs4EzvV4AaABAg", "responsibility": "developer", "reasoning": "mixed", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugy8X2JzGTkDZ77pPkB4AaABAg", "responsibility": "government", "reasoning": "deontological", "policy": "regulate", "emotion": "mixed"},
  {"id": "ytc_UgyJI14QOZPtdpR5sgh4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgxpzCnzAEDI-ghOi4p4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_Ugycg-hd13Bv_7P3CFN4AaABAg", "responsibility": "government", "reasoning": "deontological", "policy": "regulate", "emotion": "mixed"},
  {"id": "ytc_UgxR-dZBHirmy3z52eB4AaABAg", "responsibility": "ai_itself", "reasoning": "mixed", "policy": "none", "emotion": "fear"}
]
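A raw LLM response like the one above should be checked before its codes are trusted, since models occasionally emit values outside the coding scheme. The sketch below is a minimal, hypothetical validator: the allowed value sets are inferred from the sample output above, not from any documented schema, and the function name is illustrative.

```python
import json

# Allowed values per dimension, inferred from the sample response above
# (an assumption, not a documented schema).
ALLOWED = {
    "responsibility": {"none", "company", "developer", "government", "ai_itself"},
    "reasoning": {"consequentialist", "deontological", "mixed"},
    "policy": {"ban", "regulate", "liability", "none"},
    "emotion": {"outrage", "mixed", "indifference", "fear"},
}

def validate_codings(raw: str) -> list[dict]:
    """Parse a raw LLM coding response and keep only records whose
    every coded dimension falls within the allowed value sets."""
    records = json.loads(raw)
    return [
        rec for rec in records
        if all(rec.get(dim) in vals for dim, vals in ALLOWED.items())
    ]

# Example: one in-schema record passes, one out-of-schema record is dropped.
raw = (
    '[{"id":"ytc_a","responsibility":"company","reasoning":"deontological",'
    '"policy":"regulate","emotion":"outrage"},'
    '{"id":"ytc_b","responsibility":"aliens","reasoning":"mixed",'
    '"policy":"none","emotion":"fear"}]'
)
kept = validate_codings(raw)
print(len(kept))  # 1
```

Records that fail validation could instead be queued for re-prompting rather than silently dropped; the filter here just makes the schema check explicit.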