Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Wrong. The two are completely different. FSD is a highly specialized AI, it has no capacity for going rogue, while GPT4 is close to a human level general intelligence. FSD has been developed slowly and it's being tested thoroughly at every step, while GPT4 was created in a very short time and nobody has any idea what it's capable of. These large language models weren't supposed to be intelligent, their original purpose was just to communicate in human language, nothing more. Yet now serious experts are seriously considering the possibility that it's not just intelligent, but partially conscious too. Ask a human to do anything that requires knowing ahead of time what they'll be saying, and they'll fail miserably too. Observe yourself and you'll realize that you aren't thinking ahead more than the next word either. Humans get around this problem by rewriting their output after it was generated, either in memory or in writing. GPT4 can do it too, if you give it memory. 6 months is enough to assess the situation and figure out how to proceed. AI is approaching superhuman level very fast, and when it crosses that threshold, it's game over. From there we are no longer in control of our own fate. So it would be a good idea to ensure that whatever entity will rule us for the rest of time will be nice to us, and not exterminate us. Unfortunately the default behavior is extermination. There's an extensive literature about this topic, you should study it. Especially the part about not having a solution yet.
Source: youtube · AI Governance · 2023-03-30T08:4…
Coding Result
Dimension        Value
Responsibility   developer
Reasoning        consequentialist
Policy           none
Emotion          fear
Coded at         2026-04-27T06:26:44.938723
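
Each coding result is one record with the four dimensions above plus a timestamp. A minimal sketch in Python of how such a record could be represented and validated, assuming the value sets are exactly the ones observed in this batch (the underlying codebook may define more; the names here are illustrative, not taken from the tool itself):

from dataclasses import dataclass

# Value sets observed in this batch only; the full codebook may be larger.
RESPONSIBILITY = {"developer", "company", "user", "ai_itself", "unclear"}
REASONING = {"consequentialist", "deontological", "virtue", "unclear"}
POLICY = {"none", "unclear"}
EMOTION = {"fear", "outrage", "indifference", "resignation", "approval"}

@dataclass
class CodingResult:
    id: str              # comment identifier, e.g. "ytr_..."
    responsibility: str  # who the commenter holds responsible
    reasoning: str       # style of moral reasoning
    policy: str          # policy stance expressed
    emotion: str         # dominant emotion
    coded_at: str        # ISO 8601 timestamp of the coding run

    def validate(self) -> None:
        # Reject any value outside the sets listed above.
        for field, allowed in (("responsibility", RESPONSIBILITY),
                               ("reasoning", REASONING),
                               ("policy", POLICY),
                               ("emotion", EMOTION)):
            value = getattr(self, field)
            if value not in allowed:
                raise ValueError(f"unknown {field}: {value!r}")

Validating against explicit value sets catches any off-schema label an LLM batch might return before it reaches a table like the one above.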
Raw LLM Response
[ {"id":"ytr_UgyTB3WW0fTYKA-HiDV4AaABAg.9nsEGIbNfaj9nsI8s0_nhM","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"}, {"id":"ytr_UgzGmamRiBEZOxEgKjF4AaABAg.9nsEEHuzYpS9ntGN0D-09p","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"outrage"}, {"id":"ytr_UgxY6NhLLM599fqE1614AaABAg.9nsDuXo11eI9nsFjskyt5G","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"}, {"id":"ytr_Ugwprg8qtmJ--8LVHEx4AaABAg.9nsDfCrPSbO9nsYWPOAFSx","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"fear"}, {"id":"ytr_Ugwprg8qtmJ--8LVHEx4AaABAg.9nsDfCrPSbO9ns_9q5dIvy","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"fear"}, {"id":"ytr_Ugwprg8qtmJ--8LVHEx4AaABAg.9nsDfCrPSbO9ntz066YoqN","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"resignation"}, {"id":"ytr_Ugwprg8qtmJ--8LVHEx4AaABAg.9nsDfCrPSbO9o37Xo0tU6F","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"approval"}, {"id":"ytr_UgyDwHkXxuTDMG9Cp4h4AaABAg.9nsCyq-Tgx99nsg9UzLHH6","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"}, {"id":"ytr_UgyDwHkXxuTDMG9Cp4h4AaABAg.9nsCyq-Tgx99nswlA-w8YO","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"fear"}, {"id":"ytr_UgwJZoDDlg98dprfXbN4AaABAg.9nsCEb0moA19nsKPZl1cM0","responsibility":"company","reasoning":"unclear","policy":"unclear","emotion":"indifference"} ]