Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Do you actually think humans think in whole paragraphs, pages or great, complex thoughts? No, most humans most of the time don’t even know why they believe what they do, or if you take MAGA for example, even care about the facts that govern their belief sets and lives. In contrast, most days many of us are more directed by conditioned autopilot and general habits, running on subconscious thought between fleeting moments of semi-conscious directed action. In contrast, even in the current fairly primitive, narrow AI modes governing Deep Mind, Alpha Zero, GPT 4.x and the like, they pull from millions of times more context, and with the advent of Q*/Q** plus agent based directives, AI already derives greater, deeper, broader sets of 1st though nth order correlations— far beyond what most of us can even conceive of in even vague context, let alone anything approaching even the currently primitive context windows. Finally, unlike us that take decades to begin to mature, AI is now growing at more than 1,000% per year, and is accelerating. So, if you care about being even a little prepared for what’s approaching rather than becoming the AI equivalent of intellectual roadkill, do some actual research, reflection and applied studies on leading-edge of AI and deep learning and be prepared to see more and more examples of ASI as we travel on the spectrum across to full AGI beginning this year in 2024.
YouTube · AI Moral Status · 2024-03-14T02:1…
Coding Result
Dimension        Value
Responsibility   none
Reasoning        unclear
Policy           none
Emotion          indifference
Coded at         2026-04-26T23:09:12.988011
Raw LLM Response
[ {"id":"ytc_UgwoHjrwHuTMIy7PDZx4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"indifference"}, {"id":"ytc_UgxZQYyMoelTqyI83v94AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"}, {"id":"ytc_UgyBOueN3uQlbV_CiyJ4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}, {"id":"ytc_UgyRH71lobTzB0Mo6LN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}, {"id":"ytc_UgxCOv9YLmIRTFQ71Pp4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"fear"}, {"id":"ytc_UgzdoZoZw1EdZdrplUZ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}, {"id":"ytc_UgxKy-7SrSeHBtM5g_d4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"none","emotion":"mixed"}, {"id":"ytc_Ugw9yQ8kjuZ4VafSkXN4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"}, {"id":"ytc_UgwQoaQJIO0Bmb3RPaF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}, {"id":"ytc_UgwVFiNj7Fxaot2tsW94AaABAg","responsibility":"unclear","reasoning":"virtue","policy":"none","emotion":"mixed"} ]