Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
He is a very nice and intelligent person, obviously very knowledgeable in his field, but he is overhyping the superintelligence of the current systems, or at least the track they are on to that, and how quickly that would occur. The risks he is outlining are real, job losses, economies changing due to the labor market being affected, and also, where this AI is used, i.e. military and intelligence gathering, and also governments using it to track their citizens. This has nothing to do with superintelligence, it is just another more sophisticated and powerful tool to be used in a malicious fashion by humans on humans. He also overlooks other maladies of AI, like bias and prejudice being introduced into these algorithms and then these systems used to gather information on people in relation to crime. The technology is vast, and it is the tip of iceberg what it is capable of, however, it cannot compare with the human brain, no where near, and not any time soon. I would even go as far to say that it is probably impossible to create a machine like the human mind, there are other deeper connections that we simply have not even begun to fathom what they are, and may not even be physically measurable.
youtube AI Governance 2025-11-14T14:4…
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  developer
Reasoning       consequentialist
Policy          liability
Emotion         fear
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_UgxnJ5aK-tpGCyfqpp54AaABAg", "responsibility": "company",   "reasoning": "consequentialist", "policy": "regulate",  "emotion": "outrage"},
  {"id": "ytc_UgyNriS6VVUcI1y0SG94AaABAg", "responsibility": "none",      "reasoning": "unclear",          "policy": "unclear",   "emotion": "fear"},
  {"id": "ytc_Ugx4tMOmOU7ucZt5bdB4AaABAg", "responsibility": "unclear",   "reasoning": "unclear",          "policy": "unclear",   "emotion": "indifference"},
  {"id": "ytc_UgxK6hdLVs21aOQYJb94AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"},
  {"id": "ytc_UgxwgxRxLsMrKYofEnB4AaABAg", "responsibility": "unclear",   "reasoning": "mixed",            "policy": "unclear",   "emotion": "approval"},
  {"id": "ytc_UgzXWAdFBIDt3Nu8AW94AaABAg", "responsibility": "developer", "reasoning": "virtue",           "policy": "regulate",  "emotion": "outrage"},
  {"id": "ytc_UgwQ5nYO_lm1W8lHNhF4AaABAg", "responsibility": "unclear",   "reasoning": "mixed",            "policy": "unclear",   "emotion": "mixed"},
  {"id": "ytc_UgzV-oOq6m0ALjQcAbN4AaABAg", "responsibility": "none",      "reasoning": "unclear",          "policy": "none",      "emotion": "resignation"},
  {"id": "ytc_UgxKbYKgifP9Oz3yuPF4AaABAg", "responsibility": "developer", "reasoning": "deontological",    "policy": "unclear",   "emotion": "mixed"},
  {"id": "ytc_Ugzis8mRYhKGmCmCGr14AaABAg", "responsibility": "ai_itself", "reasoning": "unclear",          "policy": "unclear",   "emotion": "mixed"}
]
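A batch response like the one above can be checked programmatically before the per-comment codings are accepted. The following is a minimal sketch, assuming the code sets for each dimension are those observed in this export (the actual codebook may define more or different labels), and the helper name `parse_raw_response` is hypothetical:

```python
import json

# Allowed codes per dimension -- ASSUMED from the values visible in this
# export; substitute the project's real codebook.
ALLOWED = {
    "responsibility": {"developer", "company", "ai_itself", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed", "unclear"},
    "policy": {"regulate", "liability", "none", "unclear"},
    "emotion": {"fear", "outrage", "approval", "indifference",
                "resignation", "mixed", "unclear"},
}

def parse_raw_response(raw: str) -> dict:
    """Parse a raw LLM response (a JSON array of codings), index the
    codings by comment id, and flag any out-of-codebook values."""
    rows = json.loads(raw)
    coded, errors = {}, []
    for row in rows:
        cid = row.get("id")
        for dim, allowed in ALLOWED.items():
            if row.get(dim) not in allowed:
                errors.append((cid, dim, row.get(dim)))
        coded[cid] = {dim: row.get(dim) for dim in ALLOWED}
    return {"coded": coded, "errors": errors}
```

With the array shown above, looking up `ytc_UgxK6hdLVs21aOQYJb94AaABAg` in the returned `coded` mapping reproduces the Coding Result table for this comment (developer / consequentialist / liability / fear), and a non-empty `errors` list would indicate the model emitted a label outside the codebook.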