Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples

- "It doesn't matter, you will end up accepting AI just like traditional artists ac…" (ytc_UgwGC_Td5…)
- "7:22 I don't know but I've never seen AI getting confused and mumbling when pro…" (ytc_UgxswxFwR…)
- "I think what's also fascinating is the amount of people who are blindly acceptin…" (ytc_Ugz8ej72R…)
- "Is that a report tarnishing China deliberately? First you should know what art…" (ytc_UgygOwfxa…)
- "7:45 the real issue... \"real artists\" are super salty / scared because AI …" (ytc_UgziyVqwU…)
- "Havent humans dreamed of having machines do everything for them since we invente…" (ytc_Ugwm1yazi…)
- "anybody who has half a brain knows why you people are like this. it's all based…" (ytc_UgyAm4F2L…)
- "I totally agree on the AI as we see them today are very primitive and is not rea…" (ytc_Ugy2Urtq_…)
Comment
I’m sure this man is very smart, but don’t give him credit beyond his field of expertise by asking him philosophical questions. And yes, whether we are in a simulation is a philosophical question since we can’t prove it in this world. It’s Unfalsifiable.
Most atheists think that if there is a god who created this instance of existence, then that god is a cruel, inhumane, evil being. If we were to get to the point that superintelligences were to create a world (simulation), it would likely not be this messed up because allowing people to have a free will would inevitably screw up the plan for a ‘good’ world in how we, as humans, would define good. The Doctor repeatedly states how there is no way he could predict how the world will look in the future or what will happen as a result of not curbing Super AI, but then has the smugness to say he is pretty much certain that we’re in a simulation. Furthermore, Dr. Yampolskiy mentions that praying (ostensibly for a solution to this conundrum) would be good…why would prayer help? Prayer to whom?
Source: youtube · Topic: AI Governance · Posted: 2025-09-15T21:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | mixed |
| Policy | none |
| Emotion | mixed |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_UgzX2l6WNk2w0LpRhKN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugyps_t1t2LOuYFTrZN4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytc_UgyfJ8zHpCzH4tbhYZN4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_UgwzaXM92y6UwTSq_GR4AaABAg","responsibility":"none","reasoning":"unclear","policy":"regulate","emotion":"approval"},
{"id":"ytc_UgyBbjqOx6l5oKVgscl4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugxn8KXxjtmewIL7z3J4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_UgymsKvXGDf15gE71AJ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgyI2OorjtMMYgJTarh4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_UgzSfizHS3Q5y2O1Vm14AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyvEjcTu6M9PeFJKWB4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"mixed"}
]
```
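A response like the one above has to be parsed and checked before the codes are stored. The sketch below shows one minimal way to do that in Python; the allowed values are inferred from the codes visible in this dump (the real codebook may include more), and `validate_batch` is a hypothetical helper, not part of any actual pipeline.

```python
import json

# Allowed codes per dimension, inferred from the values seen in this dump.
# This is an assumption: the real codebook likely defines more categories.
ALLOWED = {
    "responsibility": {"none", "ai_itself", "unclear"},
    "reasoning": {"unclear", "mixed", "consequentialist", "virtue"},
    "policy": {"none", "regulate", "liability", "unclear"},
    "emotion": {"outrage", "mixed", "approval", "fear", "indifference"},
}

def validate_batch(raw: str) -> list[dict]:
    """Parse a raw LLM response and keep only rows whose codes are all valid."""
    rows = json.loads(raw)
    valid = []
    for row in rows:
        # Every coded dimension must hold a value from the codebook.
        if all(row.get(dim) in vals for dim, vals in ALLOWED.items()):
            valid.append(row)
    return valid

raw = '[{"id":"ytc_x","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"}]'
print(len(validate_batch(raw)))  # 1
```

Dropping invalid rows (rather than raising) lets a batch with one malformed code still contribute the remaining nine comments; the discarded IDs can then be re-queued for recoding.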