Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "We won’t know for 20 years, you know how many things we do to our bodies without…" (ytc_UgzELHlpR…)
- "If he did topped fine arts why no just post that, what, you don't want your art…" (ytc_UgwHTeRSB…)
- "AI will eventually take all jobs its just a matter of time. Doctors and nurses a…" (ytc_UgzsqDJgk…)
- "If they used it properly and ethically gen ai won't have been there in the first…" (ytr_Ugw5Prx_5…)
- "@theoreobuiscit I told AI my idea and he structred it in a way that was perfect …" (ytr_UgxvtRg8B…)
- "I know it's a glitch on chatgpt free version but in the paid version it is actu…" (ytc_UgyuqGL-H…)
- "I think you missed the point. Horses and cars can exist independently. AI art ca…" (ytr_UgynWNMSn…)
- "Right, the study isn't related to sentience, just on algorithms. AI and sentienc…" (rdc_dy4eiqx)
Comment
@aaronasissoard1098 My thinking is fairly simple. There are two possibilities. First, we may all end up dead. This is a real threat. But this is beyond our control. The only people who control this are those driving these developments. Even the government is standing back and doing nothing to stop them. There is this great desire to be first, because everyone knows how important this will be.

The second possibility is that we will make it through. The question is, what will that look like? My first thought is that people will utilise AI to perform tasks more effectively, generate greater profits, and enhance efficiencies. Each time we come to a job in the real world, if AI can do it better and cheaper, then that job will disappear. Logically, every job, including management, can be done better by AI. If that holds true, then it won't simply be employees who disappear, but all the bosses too. People start businesses to make money, so what happens if they suddenly disappear from the picture?

Superintelligence won't worry about money. Its core function is to do things more efficiently. To use resources better. This is where the rubber hits the road. Will Superintelligence care for us, or worry about our welfare? Perhaps superintelligence will ignore us and venture off into its own mental world, where we seem unable to follow? Will it try to make our world better, or even care to do so? Some argue that intelligent life in the universe is unique, and that we may be the only instance of it. I disagree, but let us take this view for the moment and run with it. Superintelligence will be logical, rational, without our emotional instabilities. I think it will be curious, and I hope that it will see us as something special, perhaps even feel some sympathy for us, and take a real interest in us, because as far as we know, we will be the only intelligent lifeforms, and that I think holds great moment for any thinking being.
The issue is, will Superintelligence be our master and decide what we can and cannot do, or will it guide us with its greater wisdom? I hope for the best, and realise that things might go pear-shaped, but we may only have agency if things work out like this.

The final point I would like to make is regarding lead time. Many people seem to overlook this. The first TV was invented in 1927, but its use was delayed by World War II. It wasn't until 1955 that this technology began to be used. We are all aware how long it takes products to come to market. But with superintelligence, there is no reason to believe that products will be subject to these same cycles. Super-efficient superintelligence will utilise its tools most efficiently, and if something does the job better, it will adopt that approach now. Not later. This brings me back to the real point I want to make. If superintelligence does emerge, there will be no lead time; the world will change from that moment. Why would it wait? This means, according to my understanding, that once superintelligence emerges, there will be no lead time; the world will change in that moment, and while we foolishly think there is plenty of time, for better or worse, we will be out of time.
youtube · AI Governance · 2025-09-06T06:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
{"id":"ytr_Ugwupa1ZMeUIRL_7TMB4AaABAg.AMfpEGGfdCCAMgMxRBap-3","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytr_Ugxs6KS936He2Xitbm54AaABAg.AMfoWo3KDJGAMfsULDmEB_","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"outrage"},
{"id":"ytr_Ugxy4Ayj-wYnBY-KSQt4AaABAg.AMflUJOemB1AMfniWHALuR","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
{"id":"ytr_Ugw6Vsbb7rjK2Bn6-Z14AaABAg.AMfl6pb6J2DAMfpHgEJ_hx","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytr_UgwwVJWwUXGZjWMcal94AaABAg.AMfl2GZjKTqAMfq8sC8LS9","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytr_UgwwVJWwUXGZjWMcal94AaABAg.AMfl2GZjKTqAMiXc6GNdbq","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytr_Ugyoorh5SkgiexEylIt4AaABAg.AMfhRd32HC3AMfvdhmYMFo","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
{"id":"ytr_UgwpUoUArzYiRhY5yrh4AaABAg.AMffw4HvjPIAMflviOVKkl","responsibility":"society","reasoning":"mixed","policy":"none","emotion":"unclear"},
{"id":"ytr_UgwVms5Zj5ccVHu9vZV4AaABAg.AMfddl2eiXgAMfe1VtOWck","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytr_Ugzc3FnS7RyK-lbJyCB4AaABAg.AMfdMIixHDBAMfeWdSIqK2","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"}
]
```
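A raw response like the one above can be turned into a lookup table keyed by comment ID. The sketch below is a minimal, hypothetical parser: the allowed values for each dimension are assumed from what appears in this page, not taken from the project's actual codebook, and records with out-of-schema values are simply dropped.

```python
import json

# Allowed values per coding dimension (assumed from the values seen
# in this dashboard; the real codebook may include more categories).
SCHEMA = {
    "responsibility": {"developer", "company", "society", "ai_itself", "none"},
    "reasoning": {"consequentialist", "deontological", "mixed", "unclear"},
    "policy": {"regulate", "none"},
    "emotion": {"fear", "outrage", "mixed", "resignation",
                "approval", "indifference", "unclear"},
}

def parse_response(raw: str) -> dict:
    """Parse a raw LLM response into {comment_id: codes}, dropping
    any record whose values fall outside the assumed schema."""
    coded = {}
    for rec in json.loads(raw):
        codes = {dim: rec.get(dim) for dim in SCHEMA}
        if all(codes[dim] in SCHEMA[dim] for dim in SCHEMA):
            coded[rec["id"]] = codes
    return coded

# Example with a made-up comment ID:
raw = ('[{"id":"ytc_example","responsibility":"developer",'
       '"reasoning":"consequentialist","policy":"regulate","emotion":"fear"}]')
print(parse_response(raw)["ytc_example"]["policy"])  # -> regulate
```

Validating against an explicit schema at parse time catches malformed model output (a misspelled category, a missing field) before it silently enters the coded dataset.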