Raw LLM Responses

Inspect the exact model output for each coded comment.

Comment
@InjectIilo Your question assumes that future AI will be like today's. That's silly. Artificial general intelligence will take us to places we've never been, let alone imagined. When we get there — and we will — the linear thinking that has worked for humanity through the ages will be inadequate for dealing with this. First, we need to dispense with the concept of “human-level” machine intelligence given that today's AI already calculates more data with greater speed and accuracy than any person. The analogy is dead on arrival. Now consider that computers are already talking to each other in indecipherable languages — as they were instructed to. Others yet operate as black boxes, performing tasks via processes and methods unknown to us, their creators. To anyone who thinks we can just isolate AGI in a box, or just unplug or shoot it, you are deluded. There will not be another human in there, but a machine that can think millions of our thoughts in a millionth of the time, and perceive facts and patterns far beyond our capabilities with thought processes beyond our comprehension. It will be out of the box before you or anyone knew what happened. And getting out doesn't even require super intelligence. A mere human programmer, whose name I can't remember, has played the role of an AGI against human challengers, and escaped the virtual box every time. You fail to realize that AI is not just another in the long list of technologies (radio, TV, automobiles, air travel, etc) that we’ve survived just fine. It is fundamentally different. You've yet to shift gears enough to wrap your brain around it. OK, your turn. Tell me how it can be controlled.
youtube AI Governance 2022-07-23T11:3…
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  unclear
Reasoning       unclear
Policy          unclear
Emotion         indifference
Coded at        2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id": "ytr_Ugz_q1np1vzN50eZr794AaABAg.A8eIqmvIonfA8eW4iNvqPF", "responsibility": "ai_itself", "reasoning": "unclear", "policy": "unclear", "emotion": "fear"},
  {"id": "ytr_UgyK8d5gSsekKlBXbul4AaABAg.9e0N_7BVlRD9e7B1Za3qSR", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "outrage"},
  {"id": "ytr_UgxJtI_i_heJ3tUJ0aN4AaABAg.9dmVKlxrg2C9doKYX_sgqa", "responsibility": "user", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytr_UgxJtI_i_heJ3tUJ0aN4AaABAg.9dmVKlxrg2C9doUEezPEGM", "responsibility": "user", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytr_UgxJtI_i_heJ3tUJ0aN4AaABAg.9dmVKlxrg2C9dp6ZK5p08f", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytr_UgxJtI_i_heJ3tUJ0aN4AaABAg.9dmVKlxrg2C9dpfa4HYibu", "responsibility": "developer", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytr_Ugx3zw0RjVc_KaAM53Z4AaABAg.9deVIQvwmVh9dpi4YaKbRF", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytr_UgwOE2EDtJlCK6gZhEJ4AaABAg.9de46mxTs_e9fN10aqcJF3", "responsibility": "company", "reasoning": "deontological", "policy": "liability", "emotion": "outrage"},
  {"id": "ytr_UgwWnVZkm5UH3tvRJMx4AaABAg.9curzz-EZxT9cyAc-8nBWA", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytr_UgwWnVZkm5UH3tvRJMx4AaABAg.9curzz-EZxT9cyBkUxSNdT", "responsibility": "ai_itself", "reasoning": "unclear", "policy": "unclear", "emotion": "fear"}
]
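The raw response above is a JSON array with one record per comment, each carrying the four coding dimensions plus an `id`. A minimal sketch of how such output might be parsed and sanity-checked, assuming Python's standard `json` module; the allowed value sets below are inferred only from the values visible in this report, not from a documented codebook:

```python
import json

# Value sets per dimension, inferred from the responses shown above (assumption,
# not an exhaustive codebook).
ALLOWED = {
    "responsibility": {"ai_itself", "user", "developer", "company", "unclear"},
    "reasoning": {"consequentialist", "deontological", "unclear"},
    "policy": {"liability", "unclear"},
    "emotion": {"fear", "outrage", "indifference", "indifference", "unclear"},
}

def parse_codes(raw: str) -> list[dict]:
    """Parse a raw LLM response and flag any value outside the expected sets."""
    records = json.loads(raw)
    for rec in records:
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{rec['id']}: unexpected {dim}={rec.get(dim)!r}")
    return records

# Toy example with a hypothetical id, not one of the records above.
raw = ('[{"id":"ytr_example","responsibility":"user",'
       '"reasoning":"unclear","policy":"unclear","emotion":"fear"}]')
codes = parse_codes(raw)
print(codes[0]["emotion"])  # fear
```

Validating against an explicit value set is what turns a silent miscoding (say, the model emitting `"anger"` where the codebook has no such label) into a loud error at ingestion time.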