Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Thanks for summarising the hearing. Listening to the discussions I think some good points were raised especially around having legislation to let people know they are talking to an AI chatbot for example. The scary element for me is what kinds of data companies like Meta, Twitter and Google already have and will be training their LLMs on. I mean the data OpenAI used in comparison was much more restricted as they didn’t already have access to data that other big social media and search companies have. Let’s consider decades of search history questions on everything from politics, voting opinions and personal opinions of almost every single human being that has, or had, access to the internet (via Google search and other search engines) and so many people express their opinions and interests via Facebook and Twitter. That data needs to be secured and regulated for AI use but it’s likely already too late. I wonder if that was Elons real reason for acquiring Twitter especially now that he purchased all those GPUs and said he was developing his own AI. And I wonder how much of search engine search history and other data Bard is trained on. To be fair, I do think that there is too much focus on throwing Sam under the bus, GPT-4 will not be in the top spot for long. Open source LLMs will reach GPT-4 capability in just a few weeks for sure. People in their homes, and many unregulated companies, (and some likely not with the best intentions) are working daily on improving LLMs. I doubt legislation is going to stop progress except for startups or legitimate companies already officially doing AI stuff. And what about other less friendly governments or criminal organisations? There should be a larger international focus here, ChatGPT and the OpenAI API are just one facet of a much larger picture. This YT comment will likely help train Bard too, so… 😅 Pausing development will only be honoured by legit companies but many others will not.
youtube · AI Governance · 2023-05-18T03:1… · ♥ 2
Coding Result
Dimension        Value
Responsibility   company
Reasoning        consequentialist
Policy           liability
Emotion          fear
Coded at         2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id": "ytc_UgyLS1trPRBAPR5pgr14AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgzuYAEz7fKvqc4vXBR4AaABAg", "responsibility": "user", "reasoning": "deontological", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgzQoUbij26wj3NU8al4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"},
  {"id": "ytc_UgxcPxHFY1M8UTpOcQt4AaABAg", "responsibility": "none", "reasoning": "deontological", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgydKmjmvzIU9PfDezN4AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "regulate", "emotion": "mixed"},
  {"id": "ytc_Ugxw5vChI_dLkVwDMb94AaABAg", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgzOEfNLEvQBCpYfE1V4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "approval"},
  {"id": "ytc_UgwntFMDCucFTPxOIp94AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_Ugxnp-_gHSvGXWtDr3Z4AaABAg", "responsibility": "user", "reasoning": "deontological", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgyyuWyKx8LwsRgIwIF4AaABAg", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "approval"}
]
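A batched raw response like the one above has to be parsed and checked before any record is stored against a comment. The sketch below is a minimal, hypothetical validator: the allowed category sets are inferred only from the values visible in this response, not from the project's actual codebook, and the function name `validate_records` is illustrative rather than part of the tool.

```python
import json

# Category sets inferred from values seen in this one response;
# the real codebook may define more (or different) labels.
ALLOWED = {
    "responsibility": {"company", "developer", "user", "ai_itself",
                       "distributed", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "unclear"},
    "policy": {"regulate", "liability", "none", "unclear"},
    "emotion": {"fear", "outrage", "approval", "indifference", "mixed"},
}

def validate_records(raw: str) -> list[dict]:
    """Parse a raw LLM batch response, keeping only well-formed records.

    A record must be a dict with an "id" field and a recognised value
    for every coding dimension; anything else is silently dropped.
    """
    records = json.loads(raw)
    valid = []
    for rec in records:
        if not isinstance(rec, dict) or "id" not in rec:
            continue
        if all(rec.get(dim) in vals for dim, vals in ALLOWED.items()):
            valid.append(rec)
    return valid

# Example: the record that matches the coding result shown above.
raw = ('[{"id":"ytc_UgzQoUbij26wj3NU8al4AaABAg",'
       '"responsibility":"company","reasoning":"consequentialist",'
       '"policy":"liability","emotion":"fear"}]')
print(len(validate_records(raw)))  # 1
```

Dropping malformed records (rather than raising) lets one bad entry in a batch of ten fail without discarding the other nine codings.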