Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "Ethan Hunt (Mission Impossible) had warned us many times about this. They called…" (ytc_UgyDU-a7j…)
- "The point was that it was AI which is created through stealing people's art, reg…" (ytr_UgwWfGt43…)
- "Perhaps an odd connection, but there's a similar philosophy by the developers of…" (ytc_Ugw6hKfrv…)
- "Well said Grant (ha funny) as for refresh rate, generally the higher the better …" (ytr_UgyORXstL…)
- "I am against AI, but I think exposing pseudos of the people that made pro-AI com…" (ytc_UgztnZQm8…)
- "I was hoping he would actually be the AI in some sort of crazy plot twist…" (ytc_UgxDtICSj…)
- "Thank you for your observation! Sophia's humility and focus on continuous learni…" (ytr_Ugx6zvwDv…)
- "There is indeed job loss because of AI, it’s just everyone was wrong how exactly…" (rdc_oac20ba)
Comment

I feel like we're unironically just destined to go this route: intelligence inhabiting a simulation has the innate desire to create new simulations to keep the endless chain of "experiencing" going, almost like a maternal instinct but on a hivemind-level scale. This is also the reason I believe AI won't doom us: for us to be living inside a simulation, we'd have to be able to safely reach the point of creating our own simulations, which would not be possible if a great filter such as an AI doomsday scenario were very prevalent. This is also the only strong contradiction I found in Roman's work: he believes strongly in the simulation hypothesis, yet doesn't believe we can create sentient AI with the ability to do so.

Source: youtube · AI Governance · 2025-09-05T00:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | unclear |
| Policy | none |
| Emotion | mixed |

Coded at: 2026-04-26T23:09:12.988011
Raw LLM Response
```json
[
  {"id":"ytc_UgzvdsjASHcW0MrfZUV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugzd-_gUBtp9IMIbQO94AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_UgxZPsa4iDD63oYQo8J4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugzww_5SCrKhXz2mrYN4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"mixed"},
  {"id":"ytc_UgyGb0tPg6yXP0YV1Ap4AaABAg","responsibility":"distributed","reasoning":"contractualist","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugz3LgjYVp8-eKM6lDF4AaABAg","responsibility":"distributed","reasoning":"deontological","policy":"ban","emotion":"fear"},
  {"id":"ytc_Ugz0N-DboudeMAQCBOt4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_UgwKPHUFmyxUn9KKhQp4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"liability","emotion":"mixed"},
  {"id":"ytc_Ugzz3hY-N6NPTNIuHsV4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgyCxlrrWBRbT0ll7p14AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"}
]
```
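A raw response like the one above has to be parsed and checked before the codes are stored. Below is a minimal validation sketch in Python. The allowed values per dimension are only those observed in this sample batch and the coding-result table; the actual codebook may define additional categories, and `validate_batch` is a hypothetical helper name, not part of the pipeline shown here.

```python
import json

# Allowed values per dimension, as observed in this sample batch
# (assumption: the full codebook may include more categories).
SCHEMA = {
    "responsibility": {"none", "distributed", "company", "developer", "ai_itself"},
    "reasoning": {"unclear", "consequentialist", "contractualist", "deontological", "virtue"},
    "policy": {"none", "unclear", "regulate", "ban", "liability"},
    "emotion": {"approval", "fear", "mixed", "indifference", "outrage", "resignation"},
}

def validate_batch(raw: str) -> list[dict]:
    """Parse a raw LLM response and keep only rows whose codes are known."""
    rows = json.loads(raw)
    valid = []
    for row in rows:
        if "id" not in row:
            continue  # every coded row must carry its comment ID for lookup
        if all(row.get(dim) in allowed for dim, allowed in SCHEMA.items()):
            valid.append(row)
    return valid

raw = '[{"id":"ytc_x","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"}]'
print(len(validate_batch(raw)))  # → 1
```

Rows that fail validation can then be queued for re-coding rather than silently dropped, which keeps the comment-ID lookup above consistent with what was actually stored.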