Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
@andrasbiro3007 I want to believe GPT is more than what it seems, but it's does not seem remotely close to a general human intelligence. It's fantastic at regurgitating information and it knows how to be creative with it based lots of examples. It can write pretty well, but it doesn't really seem to know what it is actually writing about. It's not hard to expose holes in it's understanding assuming it has any if you start to question it a bit. I don't think FSD is that specialized. Navigating the world, interacting with objects and planning it's actions is a big part of AGI. They aren't putting it in a humanoid robot because it's only great at driving. I think if it was able to communicate what it was "thinking" in words it would look much more impressive to people. Also, I would love to ask it why it keeps turning on my left turn signal for no reason. I have read Superintelligence and tons of other writing on the subject. I am more than familiar with the various arguments about what could happen.
youtube AI Governance 2023-03-30T22:0…
Coding Result
Dimension        Value
Responsibility   unclear
Reasoning        consequentialist
Policy           unclear
Emotion          resignation
Coded at         2026-04-27T06:26:44.938723
Raw LLM Response
[
{"id":"ytr_UgyTB3WW0fTYKA-HiDV4AaABAg.9nsEGIbNfaj9nsI8s0_nhM","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytr_UgzGmamRiBEZOxEgKjF4AaABAg.9nsEEHuzYpS9ntGN0D-09p","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"outrage"},
{"id":"ytr_UgxY6NhLLM599fqE1614AaABAg.9nsDuXo11eI9nsFjskyt5G","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytr_Ugwprg8qtmJ--8LVHEx4AaABAg.9nsDfCrPSbO9nsYWPOAFSx","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytr_Ugwprg8qtmJ--8LVHEx4AaABAg.9nsDfCrPSbO9ns_9q5dIvy","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytr_Ugwprg8qtmJ--8LVHEx4AaABAg.9nsDfCrPSbO9ntz066YoqN","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"resignation"},
{"id":"ytr_Ugwprg8qtmJ--8LVHEx4AaABAg.9nsDfCrPSbO9o37Xo0tU6F","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"approval"},
{"id":"ytr_UgyDwHkXxuTDMG9Cp4h4AaABAg.9nsCyq-Tgx99nsg9UzLHH6","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytr_UgyDwHkXxuTDMG9Cp4h4AaABAg.9nsCyq-Tgx99nswlA-w8YO","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"fear"},
{"id":"ytr_UgwJZoDDlg98dprfXbN4AaABAg.9nsCEb0moA19nsKPZl1cM0","responsibility":"company","reasoning":"unclear","policy":"unclear","emotion":"indifference"}
]
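A minimal sketch of how a raw batch response like the one above might be parsed and checked against the coding scheme. The label vocabularies below are inferred only from the values visible in this sample; the real codebook may define additional categories, and `parse_llm_batch` is a hypothetical helper, not part of any pipeline shown here.

```python
import json

# Allowed labels per dimension, inferred from this sample output only;
# the actual coding scheme may include further categories.
ALLOWED = {
    "responsibility": {"ai_itself", "developer", "company", "user", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"none", "unclear"},
    "emotion": {"fear", "outrage", "indifference", "resignation", "approval"},
}

def parse_llm_batch(raw: str) -> list[dict]:
    """Parse a raw model response (a JSON array of coded records) and
    reject any record carrying an out-of-vocabulary label."""
    records = json.loads(raw)
    for rec in records:
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{rec.get('id')}: unexpected {dim}={rec.get(dim)!r}")
    return records

# Example with an abbreviated, hypothetical record id.
raw = (
    '[{"id":"ytr_example","responsibility":"unclear",'
    '"reasoning":"consequentialist","policy":"unclear","emotion":"resignation"}]'
)
batch = parse_llm_batch(raw)
print(batch[0]["emotion"])  # resignation
```

Validating labels at parse time surfaces any response where the model drifted outside the codebook, rather than letting a stray value propagate into the coded dataset.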