Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- `ytr_Ugxs2kHxv…`: "@nathanuncentered6172 I work in the field and have a CS degree. I'm not claiming…"
- `ytc_UgzZ-cWle…`: "10:10 That's the problem. Humans should practice, human beings should be able to…"
- `ytc_UgwLwAPrO…`: "They won't feel them but they will make sure to convince you that they are feeli…"
- `ytc_UgzO0uvMQ…`: "we have to sometimes pack cast iron parts weighing hundreds of kilos. Spray them…"
- `ytc_UgzgxoKUB…`: "Imagine deepfake videos circulating on whatsapp and manipulating public opinion.…"
- `ytc_UgwShQaQV…`: "I find it quite pathetic and sad that Google is so behind OpenAI. Imagine how mu…"
- `ytc_Ugw--frEG…`: "If it's hardcoded to always say yes to being an AI then why is it not "hardcoded…"
- `ytc_Ugw2ENoyO…`: "The perception of advanced AI as nothing more than a “tool” overlooks the transf…"
Comment
Okay. Question: if we're 99.99% in a simulation now - how is it dangerous if we reach singularity WITHIN an already simulated world - since in order to have a simulated world, singularity must have been reached. See what I mean? Since singularity has already happened with 99.99% probability - what does it mean to reach singularity WITHIN a world that is already a simulation projected by a singularity? It's either or: either we're in a simulation - then you don't have to worry about ai. Or this isn't a simulation and THEN you can worry about ai. See my point?
youtube · AI Governance · 2025-09-06T21:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | mixed |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_UgxOMk-oqQpa0Nfpr8F4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugzqev8F1V_VwcJegXZ4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgzToaIOn3WQs1Wx8FB4AaABAg","responsibility":"unclear","reasoning":"virtue","policy":"unclear","emotion":"approval"},
{"id":"ytc_Ugxf-5EjLDs2vXk6x994AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgzuSzai5YTvhjr44MJ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugygqj6ZnHAo3FAsmyp4AaABAg","responsibility":"user","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgymAJZtU33TXE9yi_94AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgyxCu_CWN17ezEwv2N4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgxHjdrzS8dA6bWC8WB4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgzN8T7bc0eC1GUIFGR4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"}
]
```
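The raw model output above is a JSON array with one coding object per comment ID. A minimal sketch of how the "look up by comment ID" view could work on top of it, parsing the array and indexing it by `id` (the function name `index_codings` and the abbreviated two-entry sample are illustrative, not the tool's actual code):

```python
import json

# Two entries copied from the raw LLM response above, abbreviated for the sketch.
raw_response = """
[
 {"id":"ytc_UgzuSzai5YTvhjr44MJ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
 {"id":"ytc_Ugygqj6ZnHAo3FAsmyp4AaABAg","responsibility":"user","reasoning":"deontological","policy":"liability","emotion":"outrage"}
]
"""

# The four coded dimensions shown in the Coding Result table.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def index_codings(raw: str) -> dict:
    """Parse the raw model output and map comment ID -> coded dimensions.

    Any dimension missing from an entry falls back to "unclear", matching
    the default value used throughout the coding table.
    """
    return {
        item["id"]: {dim: item.get(dim, "unclear") for dim in DIMENSIONS}
        for item in json.loads(raw)
    }

by_id = index_codings(raw_response)
print(by_id["ytc_UgzuSzai5YTvhjr44MJ4AaABAg"]["policy"])   # regulate
print(by_id["ytc_Ugygqj6ZnHAo3FAsmyp4AaABAg"]["emotion"])  # outrage
```

Building the dictionary once makes each subsequent ID lookup O(1), which is what the search box at the top of this page relies on.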