Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
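The lookup works against the raw model output: a JSON array of records, each keyed by the comment `id` and carrying one value per coded dimension (see the Raw LLM Response section). A minimal sketch of that lookup in Python, using a record that appears in this page's sample batch:

```python
import json

# The raw model output is a JSON array of coding records; this single-record
# sample is copied from the Raw LLM Response section of this page.
raw_response = '''
[
  {"id": "ytc_UgwaIMZYMBRyEBzgaxx4AaABAg", "responsibility": "ai_itself",
   "reasoning": "virtue", "policy": "regulate", "emotion": "approval"}
]
'''

# Index the records by comment ID for constant-time lookup.
codes_by_id = {rec["id"]: rec for rec in json.loads(raw_response)}

record = codes_by_id["ytc_UgwaIMZYMBRyEBzgaxx4AaABAg"]
print(record["emotion"])  # approval
```

The same index can be built once over the full response file and reused for every inspection query.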
Random samples
Was literally going to comment the same thing with the same time stamp. The US a…
ytr_UgyiC1270…
Turkey just obliterated [Russia/Assad in Syria](https://www.insider.com/turkey-d…
rdc_gs5sr2d
We have had robots in production for decades now, some jobs are still done by hu…
ytc_UgzdRQ7Hs…
Soon: the World will move to paying its inhabitants a Universal Global Income... a…
ytc_Ugw2255dE…
“b-but we program the ai to do what it tells us to! it makes it better!” yeah ho…
ytc_Ugx7Md__e…
automation and AI are two very different things. automation is the thing that's …
ytc_Ugwng-gKD…
# Ai Weiwei (b. 1957)
# *Porcelain Cube in Pieces, 2024*
### Porcelain on ston…
rdc_lopjx4f
You completely missed that no amount of attempting to put the genie back in the …
rdc_kithpv5
Comment
Dr. Roman Yampolskiy is clearly a brilliant individual.
Very good interview, although he completely obliterates his entire argument regarding AI safety when he talks about his certainty that we are in a simulation.
The logic path regarding AI safety he adheres to, by its intrinsic reasoning, fundamentally implodes the argument for safety, and the risks respective to a billion or multiple billions of people.
If we are in a simulation, and NOT sentient beings that in fact exist in physical as well as spiritual form, then ethics / morals / safety, in fact do not matter.
The only logic path that supports reasoning for safety / morals / ethics, can only exist if in fact we as humans are sentient beings. Not a simulation.
In a simulation, there are no existential consequences for crime, safety, jobs, dystopia, or any other aspect of risk. Particularly considering such risks to humanity or ethics, have no effect to anything that does not actually exist. In other words, if we are in a simulation, we do not logically exist. Therefore, safety does not matter and our placement in the simulation has no consequences and harm against a simple algorithm which would be the feature of a simulation, then would have no meaning.
One cannot logically have it both ways.
Either there is a risk to sentient beings, which by logic cannot be simulated, or there is not such risk because we simply do not actually exist.
I have debated this with quite a few PhD level educated people and they lock up completely every time this contradiction in their logic path is discussed.
At the end of the day, if we are units participating in a simulation, then by the laws of physics and conservation of energy, we then must all immediately self delete in order to conserve said energy rather than transferring such energy to non-productive activities.
Source: youtube · AI Governance · 2025-09-15T02:0… · ♥ 2
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | deontological |
| Policy | unclear |
| Emotion | mixed |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_UgwbMd_M1sSoolQPNj54AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgwfI2BkiJS_EPG9bKF4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugyp0skps8O7ekhM49V4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"approval"},
{"id":"ytc_UgyUjECTsD7m0g4hcHF4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgwnbfHTu_bs86A5k7Z4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwU2XA9gbQbjYxF4A14AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
{"id":"ytc_UgwH4BbmsWCSSUE-Tgp4AaABAg","responsibility":"unclear","reasoning":"deontological","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugyih_iBofxWp_rdxbd4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgwjbwJIa_GeaHmdjct4AaABAg","responsibility":"unclear","reasoning":"deontological","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgwaIMZYMBRyEBzgaxx4AaABAg","responsibility":"ai_itself","reasoning":"virtue","policy":"regulate","emotion":"approval"}
]
```
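Each coded dimension draws from a small closed vocabulary. A minimal validation sketch for a response batch — note the allowed-value sets below are inferred only from the values that appear in this sample, not from the project's actual codebook, and the `ytc_x` ID is hypothetical:

```python
import json

# Allowed values per dimension, inferred from this sample batch; the
# project's real codebook may define additional categories.
ALLOWED = {
    "responsibility": {"unclear", "ai_itself"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed", "unclear"},
    "policy": {"unclear", "none", "regulate"},
    "emotion": {"fear", "approval", "mixed", "indifference"},
}

def validate(records):
    """Return a list of (id, dimension, bad_value) for out-of-vocabulary codes."""
    errors = []
    for rec in records:
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                errors.append((rec.get("id"), dim, rec.get(dim)))
    return errors

# Hypothetical record with in-vocabulary values on every dimension.
sample = json.loads('[{"id": "ytc_x", "responsibility": "unclear", '
                    '"reasoning": "deontological", "policy": "unclear", '
                    '"emotion": "mixed"}]')
print(validate(sample))  # []
```

Running this over the full raw response before displaying a Coding Result would catch any record where the model drifted outside the coding scheme.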