Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
As an AI hobbyist, I do think that a lot of the doomsday predictions around AI are overblown at this point. But I do think that it would absolutely be prudent to establish some ground rules for ethical development. The Congressional hearings were just for show... there would be no point in the US alone regulating development and powerful US companies will keep Congress from doing anything against them while the rest of the world is free to do as they please. We need to see something along the lines of the IAEA established through international treaties that can regulate state-sponsored and corporate development. Ironically, it would make sense for a regulating body to have a sort of police AI. One that could scour the web for mentions of major developments, and coordinate to efficiently evaluate those emerging AIs...through code reviews to flag things of concern for human eyes, or whatever seems appropriate. Of course, that AI to rule them all seems like it could become exactly what we're afraid of...well-resourced, broad reach, learning from novel developments in others... -_-
youtube AI Governance 2023-07-07T18:3…
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  company
Reasoning       consequentialist
Policy          regulate
Emotion         fear
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id":"ytc_UgyBQI1tTmqV1VWD6qh4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgzVZcml9rQWGyZmXV14AaABAg","responsibility":"distributed","reasoning":"contractualist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgykOeVtB1VEfrIDNaB4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgwzjK8EMspKs5AiQSp4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgyQlV3uMil2LmLTZOJ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxY8mnPAUvOTjcrSX14AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgxD1MFK4aqVGkJJ1Ud4AaABAg","responsibility":"company","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgyfW___-M5sNLxwTKh4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"},
  {"id":"ytc_Ugx9WF67iFH-zEQFYpR4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgxwSoxHe3C5hLyFFUx4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"}
]
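The raw response above is a JSON array in which each object carries a comment id plus one value per coding dimension (responsibility, reasoning, policy, emotion). A minimal Python sketch of turning such a response into per-comment coding results, assuming the output is well-formed JSON; `parse_codings` and the abbreviated `raw_response` below are hypothetical, not part of the tool:

```python
import json

# Abbreviated stand-in for a raw model output: a JSON array where each
# object has a comment "id" plus one value per coding dimension.
raw_response = """[
  {"id":"ytc_UgyBQI1tTmqV1VWD6qh4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgxY8mnPAUvOTjcrSX14AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]"""

DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def parse_codings(raw):
    """Map each comment id to its coded dimension values.

    Skips malformed entries rather than failing the whole batch, and
    falls back to "unclear" for any dimension the model omitted.
    """
    codings = {}
    for entry in json.loads(raw):
        if not isinstance(entry, dict) or "id" not in entry:
            continue  # tolerate stray items in the model output
        codings[entry["id"]] = {d: entry.get(d, "unclear") for d in DIMENSIONS}
    return codings

codings = parse_codings(raw_response)
print(codings["ytc_UgxY8mnPAUvOTjcrSX14AaABAg"]["emotion"])  # fear
```

The per-dimension fallback matters in practice: LLM batch outputs occasionally drop a key, and defaulting to "unclear" keeps the record inspectable instead of raising a `KeyError`.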