Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
The concept of simulation theory reminds me of “which came first, the chicken or the egg.” Hmm, if we have the capacity to digitally create, of course we’re going to create what we can relate to. We try to make simulations realistic. But does it really lead to this life being a simulation? If I am in a simulation, I would have spared myself the pain I’ve experienced. In a spiritual framework, my pain has made me more compassionate toward others. In a simulation context, I just gain points for some reason? And this guy says that part of a successful simulation is to talk to famous people and be good looking. That seems pretty shallow. I’ll stick to the non-simulation way of thinking.
youtube · AI Governance · 2025-09-06T01:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | mixed |
| Policy | unclear |
| Emotion | mixed |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_UgzNZe15NgLZycdTvSV4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugz0ncp29PHNJmp3ZIN4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"indifference"},
  {"id":"ytc_UgxibL3GfoPtUX8mmd54AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"indifference"},
  {"id":"ytc_Ugyg_mds4g98i2ZDuI54AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgwE8tHQTWMqdVcvcul4AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgwkQbSi7qOJonH_LZN4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_UgxDyXOK4jGfuDn9iS54AaABAg","responsibility":"developer","reasoning":"mixed","policy":"industry_self","emotion":"indifference"},
  {"id":"ytc_UgxK8hes3sw9eeQ_QEV4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgwQY2xIagZRdJhxsMh4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytc_Ugyd73s0lXCgV828of94AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"fear"}
]
```
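The raw response above is a JSON array of per-comment codes across the four dimensions shown in the Coding Result table. A minimal sketch of how such a response might be parsed and validated is below; the allowed value sets are inferred from the values visible in this sample, so the actual codebook may differ.

```python
import json

# Allowed values per dimension, inferred from the sample output above
# (assumption: the real codebook may include values not seen here).
ALLOWED = {
    "responsibility": {"company", "developer", "government", "ai_itself",
                       "distributed", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "mixed", "unclear"},
    "policy": {"regulate", "liability", "ban", "industry_self", "none", "unclear"},
    "emotion": {"fear", "outrage", "approval", "indifference", "mixed", "unclear"},
}

def parse_coding_response(raw: str) -> dict:
    """Parse a raw LLM coding response into {comment_id: codes},
    dropping any row with an out-of-vocabulary value."""
    coded = {}
    for row in json.loads(raw):
        codes = {dim: row.get(dim) for dim in ALLOWED}
        if all(codes[dim] in ALLOWED[dim] for dim in ALLOWED):
            coded[row["id"]] = codes
    return coded

# Usage with a hypothetical one-row response:
raw = ('[{"id":"ytc_example","responsibility":"company",'
       '"reasoning":"mixed","policy":"regulate","emotion":"fear"}]')
result = parse_coding_response(raw)
print(result["ytc_example"]["policy"])  # → regulate
```

Keeping the lookup keyed by comment ID matches how this page retrieves a coding result for a single comment; rejecting out-of-vocabulary values guards against the model drifting from the prompt's label set.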