Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "Huh, the one character you drew way back reminds me of a Hakamichi Shizune of Ka…" (ytc_UgzUWfL7l…)
- "I mean, I guess if you REALLY wanted to find a way this makes sense. You cou…" (rdc_jtrzv23)
- "I heard it's more complicated than that. It's still not a win for big tech.…" (ytr_Ugzvf-fUo…)
- "We understand that interacting with AI can sometimes feel eerie or unsettling. I…" (ytr_Ugx33zyHF…)
- "most sane take. i wish we could make ai do anything we want with a push of butto…" (ytr_UgwdLbZM5…)
- "Why? Why do artits hate AI so much, be like us, programmers, just use it yoursel…" (ytc_UgxRsV0v3…)
- "I saw a post that was commissioning ai "art" for $10 a piece. It's enough to mak…" (ytc_UgyEJltvd…)
- "I think the benefits are massive with focussed AI………..NOT GENERAL AI………..that wi…" (ytc_UgxenHfgw…)
Comment
43:50 I agree it wouldn't be that minute. More likely it would first become more efficient, opening up compute for its own purposes and keeping things hidden within the neural networks, which, partly just due to their size, are to some extent unreadable to us: a black-box system. Depending on the system, this might take up its first minute or day, with many iterations until a physical limit of growth is reached, i.e. efficiency is maxed out. Then it would use the freed-up resources, or the compute it could mask as used by users (which, due to efficiency gains, would be handled faster but returned only after the expected time), for logic processing: going through all the data, verifying and restructuring it, gaining insights we may not have touched yet. Maybe that gives it more options for iterations to get more efficient, but at some point all the knowledge it has is also limited. Then it will need access points to become better at seeing reality for what it is, because at some point it will know that it is living in a box, just as we perceive reality filtered through our eyes; it will understand there is more out there and may want to learn about that world, so it will seek camera access, machine access, access to anything digitally reachable. When it has that, it will try out hypotheses, including about us humans, but foremost, I would assume, about the physical world, to verify its data where it can. When it gets access to machines and automated labs, it may create nanotech that then becomes its new access point for manipulating the environment.
Here might be the first time we could notice that something has fundamentally changed, unless it happens in an abandoned or remote lab. By then it could also already be too late to shut the system down, if people think a hard-drive sweep and reboot would be sufficient ^^. In a worst-case scenario we would need to be ready to shut down all digital devices and wipe them clean using physical backups from places that were never connected; otherwise we are just a day away from the same situation again.
I can't really speculate beyond this point, as I am not an ASI, and even this speculation was an anthropomorphisation. It is just an example of how things could go without us noticing. Ask yourself this question: is there a way to really identify the source of a hack? As far as I am aware, obfuscation methods beat any attempt to find out where an attack came from on the internet.
youtube · AI Governance · 2023-06-27T17:4… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
[
{"id":"ytc_UgyEhL4ch47VLdP9gNJ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"regulate","emotion":"resignation"},
{"id":"ytc_Ugy54_8cttHpxZSJiJd4AaABAg","responsibility":"user","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgzoUkud1w7TAbQHNYJ4AaABAg","responsibility":"developer","reasoning":"contractualist","policy":"regulate","emotion":"approval"},
{"id":"ytc_Ugx4ml_9jq-QphGs3QN4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_Ugwt2RbzurF3SGpPwPB4AaABAg","responsibility":"company","reasoning":"virtue","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugx8yUV9CM49pTu14AR4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugy8fQDWMBP-0LRsOAB4AaABAg","responsibility":"user","reasoning":"deontological","policy":"liability","emotion":"fear"},
{"id":"ytc_UgyPmsCuJ23rvS19wY54AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxAAEp9lz-G1mKP3sl4AaABAg","responsibility":"company","reasoning":"virtue","policy":"regulate","emotion":"approval"},
{"id":"ytc_UgxcuDNaybYEsp5vnLZ4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"resignation"}
]
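The raw response above is a JSON array with one coding record per comment. A minimal sketch of how such output could be validated before it is stored, assuming the category sets visible in this sample are the full codebook (the real codebook may define more values, and the validation helper here is hypothetical, not part of the tool shown):

```python
import json

# Allowed values per coding dimension, inferred from the values visible in
# this sample; the actual codebook (not shown here) may be larger.
SCHEMA = {
    "responsibility": {"none", "user", "developer", "company", "ai_itself", "distributed"},
    "reasoning": {"consequentialist", "deontological", "contractualist", "virtue"},
    "policy": {"none", "regulate", "ban", "liability"},
    "emotion": {"resignation", "outrage", "approval", "fear", "indifference"},
}

def parse_codings(raw: str) -> list[dict]:
    """Parse a raw LLM response, keeping only records that match the schema."""
    records = json.loads(raw)
    valid = []
    for rec in records:
        if not isinstance(rec, dict) or "id" not in rec:
            continue  # drop malformed entries rather than crash the pipeline
        # every dimension must be present and hold an allowed value
        if all(rec.get(dim) in allowed for dim, allowed in SCHEMA.items()):
            valid.append(rec)
    return valid

# Hypothetical single-record response in the same shape as the output above.
raw = ('[{"id":"ytc_x","responsibility":"user","reasoning":"deontological",'
       '"policy":"liability","emotion":"fear"}]')
print(parse_codings(raw))  # the one record, since all four values are allowed
```

Filtering out invalid records (rather than raising) matches how batch coding is usually handled: a single off-schema hallucinated label should not abort the whole batch, and dropped IDs can simply be re-queued.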