Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
You are narrowly focusing on current day, available-to-the-public LLM apps, that of course cannot know things or think things. But these programs are not written so much as they are grown. And in that growing process, we have no idea how an LLM develops its own goals, its own sense of morality, of ethics, but they do. The danger is not that ChatGPT 4 is going to all the sudden hack NORAD and launch all the nukes at once, but that we create a program that itself creates the next iteration, which then improves on that, and creates the next program that continues self-replicating and self-improving, while improving exponentially at improving.... we cannot grasp how much smarter, more advanced, and more in control this program would be compared to us. We don't have the mental capacity to understand the gap between this machine and the AI researcher working on it. We become less than an ant, thinking it could control the movements of the sun. So if you think there's nothing to worry about, that AI is just a word prediction software, welp you're in for a bad time. Eliezer has been giving lectures and presentations on this for decades, the transformer technology that allowed us to make LLMs was just invented in 2018. You're not even scratching the surface of the danger of ASI.
Source: youtube · Video: AI Moral Status · 2025-10-31T01:3…
Coding Result
Dimension        Value
Responsibility   ai_itself
Reasoning        consequentialist
Policy           liability
Emotion          fear
Coded at         2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id":"ytr_UgxV2YgRxgdc1F1hK-R4AaABAg.AOvM4T10xpkAOvNvY2zOPm","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytr_UgwYohzxjxoYmuBkcrV4AaABAg.AOvLwiPDLg5AOw8UQiEFHu","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytr_UgxdbEbiCvFBPNeRhVl4AaABAg.AOvLpw2KwTkAOxRn4grOZw","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"ytr_Ugx-BFV-_V6K0ci-9zt4AaABAg.AOvLkrWRjbOAOvOSRAtdLc","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytr_UgzhtTk1YCtWceP2_AN4AaABAg.AOvLTNtQLvIAOvRlNgOKty","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"fear"},
  {"id":"ytr_Ugz6SZZpuyPn790yGol4AaABAg.AOvLLdXmfVoAOwpi2yF6Db","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"outrage"},
  {"id":"ytr_UgzcvYv9pPqb6vLCPyJ4AaABAg.AOvJor9i0AhAOvR5oFFPoj","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
  {"id":"ytr_UgzcvYv9pPqb6vLCPyJ4AaABAg.AOvJor9i0AhAOveENDdL6M","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytr_UgzcvYv9pPqb6vLCPyJ4AaABAg.AOvJor9i0AhAOvfyxWosRG","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"ytr_UgwGH9cCpJAkbZd1AwR4AaABAg.AOvJVsX79qUAOvQ00PK7Pw","responsibility":"none","reasoning":"deontological","policy":"regulate","emotion":"resignation"}
]
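The coding-result table above is one record pulled out of this raw array, matched by comment id. A minimal sketch of that lookup, assuming the raw LLM response parses as a JSON array of per-comment codings (the two-entry array below is abbreviated from the full response; the function and variable names are illustrative, not part of any tool's API):

```python
import json

# Abbreviated raw LLM response: two of the ten coded comments shown above.
raw_response = """
[ {"id":"ytr_UgzcvYv9pPqb6vLCPyJ4AaABAg.AOvJor9i0AhAOveENDdL6M",
   "responsibility":"ai_itself","reasoning":"consequentialist",
   "policy":"liability","emotion":"fear"},
  {"id":"ytr_UgwGH9cCpJAkbZd1AwR4AaABAg.AOvJVsX79qUAOvQ00PK7Pw",
   "responsibility":"none","reasoning":"deontological",
   "policy":"regulate","emotion":"resignation"} ]
"""

# Index the parsed records by comment id for direct lookup.
codings = {row["id"]: row for row in json.loads(raw_response)}

def coding_for(comment_id: str) -> dict:
    """Return the coded dimensions (responsibility, reasoning,
    policy, emotion) for a single comment id."""
    return codings[comment_id]

row = coding_for("ytr_UgzcvYv9pPqb6vLCPyJ4AaABAg.AOvJor9i0AhAOveENDdL6M")
# This record carries the same values as the Coding Result table:
# responsibility=ai_itself, reasoning=consequentialist,
# policy=liability, emotion=fear.
```

One design note: indexing by id up front makes each subsequent lookup O(1), which matters when cross-referencing many coded comments against a large raw response.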