Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- "This was interesting to watch, cause I have a planned and already started prepar…" (ytc_UgzUjuV1p…)
- "honestly even content moderation being done by ai is a really shit idea as it mo…" (ytc_Ugw7ilBso…)
- "it’s still gross and invasive. not to mention the fact that atrioc was friends w…" (ytr_UgzR1HFDI…)
- "There is no way to synthesize original knowledge or thought. All that AI is cap…" (ytc_Ugw3LCdFl…)
- "The other countries were already donating money to foreign aid, most of them at …" (rdc_dcwmxet)
- "Lol 😂interesting, now that it's the so-called high paying smart jobs, being los…" (ytc_UgzNemDzL…)
- "We won't have true fully autonomous vehicles until we get to the Singularity aro…" (ytc_UgzG3NcAu…)
- "Aaaand this is how our future Skynet destroys us all🫣The creators of this tech w…" (ytc_UgzGTkmrd…)
Comment
Is there any limit to the progression of AI? Is AI/AGI/SI limited by our ability to create technology both hardware and software? At what point does AI take over from us that we are no longer contributing to its progression, being that there is RAG and that has accelerated AI? AI has the ability to give itself feedback to self perpetuation. Also, I realized a long time ago that us humans creating AI is very similar to the creating in our image, that we will learn an enormous amount more about ourselves than we did up to AI. We learned a lot through history but now we are at a turning point that we are making an evolutionary leap about understanding ourselves as well as creating this next evolution of humans... That begs the question what is driving this progression? What is the motivation beyond creating agents/robots to help us get through life? Where is that motivation coming from, because so many humans are sharing this drive to progress AI.
youtube
AI Governance
2025-09-06T18:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | mixed |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id": "ytc_Ugy2xpXcrazVZBr8arx4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgxgTn-x0T2m3pp8Rxp4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgwbkoiVBsWFC2FnFQF4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_Ugw9FlMgebAFf-wilMR4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugwukc29pVQcAH9lbUl4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgzhgiXGzI7tu7Kz_gx4AaABAg", "responsibility": "none", "reasoning": "virtue", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgwxqxjkTSVUTaqhQ614AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgyoO6v2_nDRjcvdswF4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgwSlrOPNHBXVua2QfN4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "regulate", "emotion": "mixed"},
  {"id": "ytc_UgzIpzzncYpPVxqA14F4AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"}
]
```
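A raw response like the one above can be parsed and sanity-checked before the codes are written back to the dataset. The sketch below is a minimal example, not the project's actual pipeline: the field names come from the JSON shown here, and the per-dimension value sets are only the labels visible on this page, so the real codebook almost certainly contains more.

```python
import json

# Labels observed in the sample output above. These sets are illustrative,
# inferred from the displayed records only; they are NOT the full codebook.
OBSERVED_VALUES = {
    "responsibility": {"none", "ai_itself", "developer", "company", "distributed"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed"},
    "policy": {"none", "regulate"},
    "emotion": {"resignation", "mixed", "fear", "indifference", "approval", "outrage"},
}


def parse_coding_response(raw: str) -> list[dict]:
    """Parse a raw LLM coding response and verify each record's shape."""
    records = json.loads(raw)
    if not isinstance(records, list):
        raise ValueError("expected a JSON array of coded comments")
    required = {"id", *OBSERVED_VALUES}
    for rec in records:
        missing = required - rec.keys()
        if missing:
            raise ValueError(f"record {rec.get('id')!r} is missing fields: {missing}")
    return records


# Hypothetical single-record response for illustration.
raw = '[{"id":"ytc_example","responsibility":"none",' \
      '"reasoning":"virtue","policy":"none","emotion":"fear"}]'
coded = parse_coding_response(raw)
print(coded[0]["emotion"])  # fear
```

Validating against observed label sets (rather than hard failure on unseen values) keeps the parser tolerant when the model emits a label outside the sets listed here, which is likely given that the codebook shown on this page is partial.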