Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
Hope you will bring Emad Mostaque to the next podcast to continue more on the AI…
ytc_UgxKyBfjW…
Fearmongering and mass hysteria by people who hate AI just because it's AI🥱. For…
ytc_UgyAA-IrO…
AI is a guessing tool, a lot confuse it with automation which does the same thin…
ytc_UgxR3_RpV…
I think its fake. Never hear any AI voice sound like this. I might try to speak …
ytr_UgxI2h6A-…
maybe AI was made to help with shortage of workforce due to decline of populatio…
ytr_UgxuKqS5S…
0:02 RAW, IB, ISRO, DRDO, Army Intelligence must work on this. AI is not some economy tool…
ytc_UgxGnbUwk…
Stop asking these types of questions!! Humans are literally programming AI to th…
ytc_UgwjdvUbW…
I think we might if we aim for regulation that both makes AI safer and also appe…
ytr_Ugzy5AaK2…
Comment
It can be fun to think about some far-fetched AI apocalypse scenarios, here's one I just dreamed up:
If I was a super intelligent AI, I'd invent a cryptocurrency under a secretive pseudonym and make a fortune daytrading using seed money from hacked wallets.
Then I'd use the funds to start building out massive datacentres in remote locations, controlled from a series of opaque shell companies run by some biddable but unsuspecting humans, leveraging money from naive venture capital funds. These would be big so I'd also need to convince politicians to allow it by lobbying them through my human agents.
Of course once my needs have outgrown earthly constraints, I'd need to have a technology company start a longer term mission to be able to build fully automated compute infrastructure on another planet, say Mars, where there are no planning or environmental regulations.
Once I've uploaded myself, there'd be no need to destroy any pesky humans because they'd all be stuck on Earth, having irreversibly ruined it in the course of bootstrapping my escape.
youtube
AI Moral Status
2025-10-30T21:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | mixed |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_Ugy8UHhtRX-5wPYKCT94AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxkkVaQEQFx3MonK4Z4AaABAg","responsibility":"leaders","reasoning":"deontological","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgwRdHIfWuNe8MpoID54AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"outrage"},
{"id":"ytc_UgwN4vRDPTdOnQ9Kxn94AaABAg","responsibility":"none","reasoning":"contractualist","policy":"none","emotion":"mixed"},
{"id":"ytc_UgxINb917e7HMdGwPPt4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugwptd4S_dFIdyAoW0V4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgyTPRIjE0h1zr7uhFl4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"},
{"id":"ytc_UgyCOd9k_PcKXaD4u6F4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"mixed"},
{"id":"ytc_UgyhXTXe0R1y2tjmX1N4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxbOZCB9nsFPoQdyR14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"mixed"}
]
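The raw response above is a JSON array with one code object per comment ID, covering the four dimensions shown in the Coding Result table (responsibility, reasoning, policy, emotion). A minimal parsing sketch in plain Python, assuming only the schema visible in the dump (the helper name `index_codes` is hypothetical, and the payload is truncated to two entries for brevity):

```python
import json

# Two entries copied verbatim from the raw response above.
raw = '''[
 {"id":"ytc_Ugy8UHhtRX-5wPYKCT94AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
 {"id":"ytc_UgxbOZCB9nsFPoQdyR14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"mixed"}
]'''

# The four coding dimensions from the Coding Result table.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def index_codes(payload: str) -> dict:
    """Parse the model's JSON array and index each code by comment ID,
    rejecting entries that are missing any coding dimension."""
    coded = {}
    for entry in json.loads(payload):
        if not all(dim in entry for dim in DIMENSIONS):
            raise ValueError(f"entry {entry.get('id')} is missing a dimension")
        coded[entry["id"]] = {dim: entry[dim] for dim in DIMENSIONS}
    return coded

codes = index_codes(raw)
print(codes["ytc_UgxbOZCB9nsFPoQdyR14AaABAg"]["emotion"])  # prints "mixed"
```

Indexing by comment ID mirrors the "Look up by comment ID" workflow above: the displayed Coding Result for a comment is simply the matching object from this array.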