Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
People like Hinton keep talking about what happens when AI becomes super-intelligent or much smarter than people. Such statements are vague, ambiguous, lacking clear meaning. We don't even know what natural intelligence is; how could you compare artificial intelligence (that you don't understand) to natural intelligence (that you probably understand even less)? I've used Google Bard/Gemini for almost three years. It has gotten to the point that it often makes suggestions or demonstrates a line of reasoning that hadn't occurred to me. Yet, it also sometimes acts stupidly, even despite my efforts to redirect it. All AI have characteristic weaknesses. They aren't actually becoming more human; they just make better associations. They still can't really reason very well. I've caught Gemini completely contradicting itself without it exhibiting any awareness that it is doing so, or, if it acknowledges its contradictions, it immediately commits them again. "Is there any model where a less-intelligent thing controls a more intelligent thing?" - Yes; my boss. My managers. The police. Actually, my IQ is in the top 1.5% of the general population, so the odds are that in any power relationship--I have little to no authority or power in most situations--I'm probably going to be controlled by people who are less intelligent than me.
Source: youtube · AI Jobs · 2025-12-17T04:5…
Coding Result
Dimension        Value
Responsibility   developer
Reasoning        deontological
Policy           unclear
Emotion          mixed
Coded at         2026-04-26T23:09:12.988011
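Read as a record, the table above is one comment's label set across the four coding dimensions. A minimal sketch of that structure in Python (the class name CodingResult and the field types are illustrative assumptions, not the pipeline's actual schema):

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class CodingResult:
    """One coded comment: a label per dimension plus the coding timestamp."""
    responsibility: str  # e.g. "developer"
    reasoning: str       # e.g. "deontological"
    policy: str          # e.g. "unclear"
    emotion: str         # e.g. "mixed"
    coded_at: datetime   # when the label set was stored

# The exact result shown in the table above, as a record.
result = CodingResult(
    responsibility="developer",
    reasoning="deontological",
    policy="unclear",
    emotion="mixed",
    coded_at=datetime.fromisoformat("2026-04-26T23:09:12.988011"),
)
```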
Raw LLM Response
[{"id":"ytc_UgzXAqzvWEBntj1Nm4J4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},{"id":"ytc_UgzZYw1FuSLMpz6W6hd4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},{"id":"ytc_Ugxm14HM3-5P_XyatL94AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"fear"},{"id":"ytc_UgwghVhIwWw_46KOQhJ4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"mixed"},{"id":"ytc_Ugx_vqgCtkZoUMjJz8N4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},{"id":"ytc_UgxnCQXq6SRGm-POBrt4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},{"id":"ytc_Ugzp6CX5oy5nobIrwqF4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"resignation"},{"id":"ytc_UgzKbBm5f9lkoBbC18B4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"},{"id":"ytc_UgybYx_tnUrkggTP8kF4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"unclear","emotion":"resignation"},{"id":"ytc_Ugy7rRq4CcQDG1uuX0R4AaABAg","responsibility":"company","reasoning":"unclear","policy":"unclear","emotion":"fear"}]