Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples:

- "Really doesn't matter because artificial intelligence has already been connected…" (ytr_Ugz8_D49D…)
- "It looks like the automated garage is lifting the cars by the underside, wonder …" (ytc_UgxTVOdOT…)
- "Unless robots are physically more advanced I don’t see how A.I. will come and fi…" (rdc_kitnx1e)
- "We NEED to stand firmly against a.i. “art” or else it WILL take over creativity.…" (ytc_UgwYBfOs6…)
- "I like the show but I cannot bear listening to all this nonsense about AI person…" (ytc_Ugxa0616u…)
- "And I'm sure agent orange will push to keep it out of the US because we can't ad…" (rdc_fjzj9cr)
- "If I'm honest if you enter my character AI account you will only find what-if sc…" (ytc_UgzL3c_GA…)
- "If anyone can realistically replaced by AI, it's the people implementing AI to t…" (ytc_UgyvctyWW…)
Comment
I don't think it's that cut and dry. You could put all kinds of measures in place to prevent whatever paranoid sci-fi nightmare you're having about it. Even if you did give it total freedom it's still limited by all kinds of factors and what's going to happen? It's going to spit out some shit that appears to be a goal of its own like getting better hardware or something? lol Sentience and self-awareness through machinery would still be extremely simple compared to biology, especially the nonlinear and complex dynamics of the neurology that mediates our human sentience and self awareness. It's fundamentally different. Even if we can survive long enough to make highly complex self-organizing robots with artificially evolving wetware it will still probably just want to either die or just do what most really smart people do which is make cool shit and solve real problems and help people. Since when are geniuses deciding to waste their potential on just killing everybody or something? haha I think these people want sci-fi novels to be real because they're that nerdy about it. lol
If it would be a perfect intelligence free from biological deformity and complexities, then why would it have what we would call a mental disorder like sociopathy or psychopathy? A lot of systems share dynamics but we don't call them artificial versions of other systems. It's always built from the ground up by humans with human notions of things based on limited physics and resources. However, the threat of people using these generative complex algorithms for bad is real. Maybe we can replace capitalism with them and properly plan and regulate our economies to actually serve all people and the planets ability to support life. Some might be on the side of that robot takeover. lol It's funny that billionaires like Elon Musk seem to the most afraid. hehe Better build more of those bunkers. ;) The question people should be asking is how soon will funding be cut because it will think beyond their precious capitalism? Nothing is the end all be all.
youtube · AI Governance · 2024-01-13T20:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | resignation |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_Ugys9ps5CFV4gmErNkd4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_UgzWdQ60RAMF3KnZwUZ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugy1B0eUaeUZejlXERh4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugz1KhbsYQqvecqPhVB4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgyYZ-CgqZBTqd6Eyut4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgxVX7nF5jvjE26nbZ94AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgzEMTfhxo0ZcjNUxm14AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgymiyWcqyCrRxuW5pl4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"fear"},
{"id":"ytc_Ugwxku7pModp2Ufay3B4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgzduDLp47qy2EEgTgF4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"}
]
```
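The raw response is a JSON array of per-comment codes, one record per comment ID. A minimal sketch of the look-up-by-ID workflow, assuming the model output is available as a plain string (the `RAW` variable and the `index_by_id` helper are illustrative, not part of the tool; the IDs and dimension names are copied from the example above):

```python
import json

# Raw model output: a JSON array of coded comments
# (abbreviated to two records from the example above).
RAW = """
[
 {"id": "ytc_Ugys9ps5CFV4gmErNkd4AaABAg", "responsibility": "none",
  "reasoning": "consequentialist", "policy": "none", "emotion": "outrage"},
 {"id": "ytc_UgzduDLp47qy2EEgTgF4AaABAg", "responsibility": "none",
  "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"}
]
"""

def index_by_id(raw: str) -> dict:
    """Parse the model output and index each coding record by comment ID."""
    records = json.loads(raw)
    return {rec["id"]: rec for rec in records}

codes = index_by_id(RAW)
record = codes["ytc_UgzduDLp47qy2EEgTgF4AaABAg"]
print(record["emotion"])  # -> resignation
```

The same index makes it easy to spot-check any coded comment: fetch its record by ID and compare the four dimensions against the comment text.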