Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
@ageofdoge Then you haven't tried GPT-4. It's absolutely human level. And it's not just my experience, researchers are doing tons of scientific tests on it. There are countless tests designed to measure human intelligence and knowledge, GPT-4 aces all of them. You seem to be talking about ChatGPT (GPT-3.5), that was a bit limited indeed, but it's ancient history. FSD is very specialized, it's brain was designed for one specific task; driving. That requires some thinking, but nowhere near on the level of GPT-4. And it's running on an orders of magnitude smaller computer anyway. Plus FSD is not a single monolithic neural net, but hundreds of smaller ones. It's a human designed architecture. For safety that means Tesla engineers understand very well how it works, while large language models (like GPT) are complete mystery. We don't even understand why they work at all, they aren't supposed to be this good.
youtube AI Governance 2023-04-03T20:4…
Coding Result
Dimension      Value
Responsibility unclear
Reasoning      consequentialist
Policy         unclear
Emotion        approval
Coded at       2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id": "ytr_UgyTB3WW0fTYKA-HiDV4AaABAg.9nsEGIbNfaj9nsI8s0_nhM", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytr_UgzGmamRiBEZOxEgKjF4AaABAg.9nsEEHuzYpS9ntGN0D-09p", "responsibility": "developer", "reasoning": "virtue", "policy": "none", "emotion": "outrage"},
  {"id": "ytr_UgxY6NhLLM599fqE1614AaABAg.9nsDuXo11eI9nsFjskyt5G", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytr_Ugwprg8qtmJ--8LVHEx4AaABAg.9nsDfCrPSbO9nsYWPOAFSx", "responsibility": "developer", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytr_Ugwprg8qtmJ--8LVHEx4AaABAg.9nsDfCrPSbO9ns_9q5dIvy", "responsibility": "company", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytr_Ugwprg8qtmJ--8LVHEx4AaABAg.9nsDfCrPSbO9ntz066YoqN", "responsibility": "unclear", "reasoning": "consequentialist", "policy": "unclear", "emotion": "resignation"},
  {"id": "ytr_Ugwprg8qtmJ--8LVHEx4AaABAg.9nsDfCrPSbO9o37Xo0tU6F", "responsibility": "unclear", "reasoning": "consequentialist", "policy": "unclear", "emotion": "approval"},
  {"id": "ytr_UgyDwHkXxuTDMG9Cp4h4AaABAg.9nsCyq-Tgx99nsg9UzLHH6", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytr_UgyDwHkXxuTDMG9Cp4h4AaABAg.9nsCyq-Tgx99nswlA-w8YO", "responsibility": "user", "reasoning": "deontological", "policy": "none", "emotion": "fear"},
  {"id": "ytr_UgwJZoDDlg98dprfXbN4AaABAg.9nsCEb0moA19nsKPZl1cM0", "responsibility": "company", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"}
]
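A raw response like the one above can be parsed and sanity-checked before being merged into coding results. The sketch below is illustrative, not the tool's actual pipeline: the allowed code values are inferred from the examples in this section (not an authoritative codebook), and the sample id "ytr_x" is hypothetical.

```python
import json

# Allowed values per dimension, inferred from the coded examples above.
# This is an assumption, not the project's official codebook.
ALLOWED = {
    "responsibility": {"ai_itself", "developer", "company", "user", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"none", "unclear"},
    "emotion": {"fear", "outrage", "indifference", "resignation", "approval"},
}

def parse_raw_response(raw: str) -> list[dict]:
    """Parse a raw LLM response and keep only well-formed records."""
    records = json.loads(raw)
    valid = []
    for rec in records:
        if not isinstance(rec, dict) or "id" not in rec:
            continue  # skip malformed entries
        # Keep the record only if every dimension has a recognized value.
        if all(rec.get(dim) in vals for dim, vals in ALLOWED.items()):
            valid.append(rec)
    return valid

# Hypothetical single-record response for demonstration.
raw = ('[{"id":"ytr_x","responsibility":"developer",'
       '"reasoning":"virtue","policy":"none","emotion":"outrage"}]')
print(parse_raw_response(raw))  # the one record passes validation
```

Filtering rather than raising keeps one malformed record from discarding an entire batch; rejected records could instead be logged for manual review.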