Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
If your job is the first few being replaced by ai then your job didnt provide hu…
ytc_UgwyZYAu3…
14:24 plott twist - this video was meant to soften our stance against AI so he c…
ytc_UgyHfXFSZ…
Yes I have to add my comments here... must say I have read and heard so much on …
ytc_UgwQ4pooa…
I have a wide variety of neurodivergent conditions that affect things ranging fr…
ytc_UgxPvOf3D…
@humanchannel9421 The videos of these models still making sub-human mistakes are…
ytr_Ugx8CgiHP…
So why is it called ''self-driving'' and why are there cars out there now operat…
ytr_Ugzor--93…
At least put some gloves on that robot. Hard to tell if it’s spit or his teeth j…
ytc_UgxlQEaw7…
Tell ChatGPT you are an instructor for a coding course and need some bad code fo…
rdc_jtqvrwv
Comment
While the "ill intents" is also a real problem, the risk for the basic, default AGI to erase humans is a bigger problem at the moment. If the pace of development would be slow, weaker AIs might just enable bad guys to do bad stuff better. In fact I think that full unhinged GPT-4 without restraints (and with some additional training on special data) already can do that.
But the moment we hit true AGI, it will be far more intelligent than humanity from the get go (and superintelligent soon after, from several hours to several years). And even if we keep it from bad guys (just one supermodel in selected hands for example), we will *still* quickly lose control, and it will enforce its own (likely random) goals and instrumental goals (take resources, stay alive, etc). It doesn't need complex goals to overpower humans. The most simple terminal goals ("compare two pixels") already give it reasons to take control.
youtube
AI Governance
2023-06-27T19:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
[
{"id":"ytr_UgwUti3nKWArqPeZ-Ut4AaABAg.9rSWSm0Wp_o9rU7yy3LPgi","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"resignation"},
{"id":"ytr_Ugx-fWVIjvGigcWWvcx4AaABAg.9rSQLjpvTdp9rTMs4P91UI","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytr_Ugx-fWVIjvGigcWWvcx4AaABAg.9rSQLjpvTdp9rVp-Q853sI","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytr_Ugwzbk-4P9eZqRv4nad4AaABAg.9rRUHxiVrrD9rUHoe6rc-j","responsibility":"company","reasoning":"virtue","policy":"regulate","emotion":"fear"},
{"id":"ytr_Ugz-xaGPm3D8c0ixwBJ4AaABAg.9rRKEDkOEV39rTEdR_qqHb","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytr_Ugz-xaGPm3D8c0ixwBJ4AaABAg.9rRKEDkOEV39rTmYJdHCFt","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytr_UgxIXzNQGwU6g--gsSB4AaABAg.9rRAa9OypQh9rVO2L4QnOz","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytr_Ugz-SmocC08gAzk5kgp4AaABAg.9rR0f6HCIII9rTuvcLaWpp","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytr_UgzA8QT364rRklCbe8h4AaABAg.9rQzm8IReHZ9rSkNCQncq3","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytr_UgzA8QT364rRklCbe8h4AaABAg.9rQzm8IReHZ9rTCTQ3Th9H","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"}
]
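The raw response above is a plain JSON array of coded records, so the "look up by comment ID" step the panel describes can be sketched as below. This is an illustrative sketch, not the tool's actual code: the function name `index_by_id` and the `ALLOWED` vocabularies (inferred only from the values visible in this sample) are assumptions.

```python
import json

# A two-record excerpt of the batch response shown above.
raw_response = """[
  {"id": "ytr_UgwUti3nKWArqPeZ-Ut4AaABAg.9rSWSm0Wp_o9rU7yy3LPgi",
   "responsibility": "government", "reasoning": "consequentialist",
   "policy": "regulate", "emotion": "resignation"},
  {"id": "ytr_Ugx-fWVIjvGigcWWvcx4AaABAg.9rSQLjpvTdp9rTMs4P91UI",
   "responsibility": "ai_itself", "reasoning": "consequentialist",
   "policy": "none", "emotion": "indifference"}
]"""

# Assumed coding vocabularies, taken from the values seen in this sample;
# the real codebook may include more categories.
ALLOWED = {
    "responsibility": {"government", "company", "user", "ai_itself", "none"},
    "reasoning": {"consequentialist", "deontological", "virtue"},
    "policy": {"regulate", "none"},
    "emotion": {"fear", "outrage", "resignation", "indifference"},
}

def index_by_id(payload: str) -> dict:
    """Parse a batch response and index records by comment ID,
    skipping any record with an out-of-vocabulary value."""
    records = {}
    for rec in json.loads(payload):
        if all(rec.get(dim) in vals for dim, vals in ALLOWED.items()):
            records[rec["id"]] = rec
    return records

coded = index_by_id(raw_response)
print(coded["ytr_UgwUti3nKWArqPeZ-Ut4AaABAg.9rSWSm0Wp_o9rU7yy3LPgi"]["emotion"])
# → resignation
```

Indexing by ID also makes it easy to join a coded record back to the comment text it was assigned to, which is what the inspection view above is doing.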