Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I don't think we will reach that point. We are already reaching significant limits in quantum computing and quantum mechanics. From here on, we are going to see diminishing returns. Robots have a more fundamental problem that they need to address as well. They are simply performing inductive processes that are not helpful for general-purpose knowledge. Humans learn by guessing and checking, by unjustified and unjustifiable leaps of intuition, by trial and error. We simply use our observations to correct our guesses. We creatively conjecture how aspects of the world work (as if we were programmed in the first place), and we use criticism and those observations to disabuse us of the ideas that are false. Larson showed the true nature of artificial intelligence and its future. It may not be what we expect it to be. Robots of the future might take the same narrow, puzzle-solving approach as the founder of the field, Alan Turing. "Turing's great genius was to clear away theoretical obstacles and objections to the possibility of engineering an autonomous machine, but in so doing he narrowed the scope and definition of intelligence itself. It is no wonder, then, that AI began producing narrow problem-solving applications, and is still doing so to this day."
youtube AI Moral Status 2021-12-15T04:0… ♥ 2
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  none
Reasoning       unclear
Policy          none
Emotion         indifference
Coded at        2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id":"ytr_UgwRO4pqK5aHRAGv73J4AaABAg.9VGquFxcM209W5G1rw5sjr","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytr_Ugw3AvpYj3TaxQBZZYF4AaABAg.9Ur1ztlmAX79VC5XUuFGY6","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
  {"id":"ytr_Ugyv9tkouzibwPFFYod4AaABAg.9Tw_0-kgv_x9VByw713b9-","responsibility":"ai_itself","reasoning":"contractualist","policy":"none","emotion":"mixed"},
  {"id":"ytr_UgwLRB1fFPW11xBksxN4AaABAg.9TwPQ96LdTy9cM4iv0-8Qy","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytr_UgwBxMlYM_dRz0eDKt94AaABAg.9TwMGtSiR3f9VxpEHman_e","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytr_Ugx3UbfEtgg_laxC-ct4AaABAg.9TrrXNJ5NyR9TxJT4mAQ_J","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytr_UgwkK9cUjqOJah-X5WN4AaABAg.9TmU5YkLyzs9TxLeTMCJAi","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytr_UgwkK9cUjqOJah-X5WN4AaABAg.9TmU5YkLyzs9TxOQF64R9H","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytr_Ugys6dPXn_V_siwPEVR4AaABAg.9TdqFUK5iz09Tl1wcA-60E","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytr_UgxHqSLcILAhxgUhMoB4AaABAg.9Tcvvq_WpyM9VVfr26-Dyi","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"indifference"}
]
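The raw response above is a JSON array of per-comment codes over the four dimensions shown in the coding result. A minimal sketch of how such a response could be parsed and validated (the allowed value sets below are assumptions inferred from the codes visible in this log, not the pipeline's actual codebook):

```python
import json

# Allowed values per dimension, inferred from the codes seen in this
# log; the real codebook may define additional values.
ALLOWED = {
    "responsibility": {"none", "ai_itself", "developer"},
    "reasoning": {"unclear", "mixed", "contractualist",
                  "consequentialist", "deontological"},
    "policy": {"none"},
    "emotion": {"approval", "indifference", "mixed", "fear"},
}

def parse_codes(raw: str) -> dict:
    """Parse a raw LLM response into {comment_id: codes},
    skipping entries with missing ids or out-of-vocabulary values."""
    coded = {}
    for entry in json.loads(raw):
        cid = entry.get("id")
        codes = {dim: entry.get(dim) for dim in ALLOWED}
        if cid and all(codes[dim] in ALLOWED[dim] for dim in ALLOWED):
            coded[cid] = codes
    return coded

# Hypothetical one-entry response for illustration.
raw = ('[{"id":"ytr_x","responsibility":"none","reasoning":"unclear",'
       '"policy":"none","emotion":"indifference"}]')
print(parse_codes(raw)["ytr_x"]["emotion"])  # indifference
```

Dropping out-of-vocabulary entries (rather than raising) keeps one malformed code from discarding the whole batch; the rejected ids could be logged and re-queried.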