Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
It sure seems to think. When I ask ChatGPT or Gemini a multifaceted question, it provides an answer that assembles the many facets into a complete answer. It's different than if an LLM is asked a straightforward question, "How far away is the sun from Earth?" Compare that to, "If I were to travel to the sun from Earth, and then to Saturn, then to Jupiter, how far will I have traveled, and what would be the fastest mode of transportation, a car, a jet, a spaceship, or another form of transportation that you may know of that I don't?" In the latter question, the LLM has to calculate the distance at that moment given the ever-changing alignment of the planets. It then has to review all the "known" modes of travel I mentioned. After that, it has to decide whether there's a better mode of travel and, if so, what that mode is. Once it has all of this information, it summarizes what it has compiled and provides a final answer, or in this case, a couple of options for the best mode of travel.

So, to the question, "Is the LLM/AI thinking, or reasoning?" (reasoning, thinking, it's all the same to me), I think the only answer is yes! It is thinking; it is reasoning. It has to be. Now, how is it thinking? Geoffrey explained how thinking and reasoning are formed in the human brain. My contention is that just because the way a machine derives these answers takes a different path from the human brain's, that doesn't mean we can't call it thinking. Neurons fire in the brain; in the AI, something else happens, and I'm pretty sure that is one of the parts we don't really understand, beyond a few ideas. Whatever process the AI is taking, albeit a different path from the human brain, it is still thinking, in its own way. AI is ALREADY thinking and smarter than most people. I believe 110% that we are already at AGI. Aren't thinking, reasoning, and intellect the biggest determining factors? Critics say AI isn't AGI because it can't drive a car as well as a 17-year-old.
But most 17-year-olds can't instantly calculate planetary orbits or summarize the history of philosophy. We shouldn't confuse "human-like" with "intelligent," or with AGI. AI is a different kind of mind, but it's a mind nonetheless.
youtube AI Moral Status 2026-03-04T07:3…
Coding Result
Dimension        Value
Responsibility   none
Reasoning        unclear
Policy           unclear
Emotion          approval
Coded at         2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id": "ytc_UgyJYrKrOc8dS7jDm8p4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgznPzhCr6XGmwFteY54AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgxQADR41kcCEAMYqUR4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgwtmgEmwXpgNl-MWXd4AaABAg", "responsibility": "ai_itself", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgzVlCSEPS1P5p9AFw94AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "approval"},
  {"id": "ytc_UgwN5VDyZ8PRQ6z-zYZ4AaABAg", "responsibility": "company", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgzJYWQxX8zTPGypdwh4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_Ugy0Bg7i1DZY0cEctDh4AaABAg", "responsibility": "developer", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgwW6ATM58O0D7Ww7md4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "approval"},
  {"id": "ytc_UgwfscWfR33DwGUWV_V4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"}
]
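The raw response above is a JSON array with one coding object per comment id. A minimal sketch of how such a batch response could be parsed and looked up by id, assuming this array shape; the helper name `coding_for` and the truncated two-entry sample are illustrative, not part of the actual pipeline:

```python
import json

# Illustrative excerpt of a raw LLM response: a JSON array of
# per-comment codings (only two of the ten entries shown).
raw = """[
  {"id": "ytc_UgzVlCSEPS1P5p9AFw94AaABAg", "responsibility": "none",
   "reasoning": "unclear", "policy": "unclear", "emotion": "approval"},
  {"id": "ytc_UgwfscWfR33DwGUWV_V4AaABAg", "responsibility": "company",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"}
]"""

# Index the codings by comment id for direct lookup.
codings = {row["id"]: row for row in json.loads(raw)}

def coding_for(comment_id: str) -> dict:
    """Return the coded dimensions for one comment id (hypothetical helper)."""
    return codings[comment_id]

print(coding_for("ytc_UgzVlCSEPS1P5p9AFw94AaABAg")["emotion"])  # approval
```

Indexing by id makes it easy to cross-check a single comment's row in the Coding Result table against the exact object the model emitted for it.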