Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
Perhaps it should be considered that we already have a graduated accountability …
ytc_UgzXTj-qD…
There's no call to action found here In Reducing The AI Threat With your Quest…
ytc_UgzLKD50b…
Photoshop brushes have AI in it. And have for decades. Filters. Post processing.…
ytc_UgxS7r4gf…
I have a Tesla but never used the autopilot except for a free trial where I imme…
ytc_UgyRyYExb…
You should talk to someone about Ai who's not a materialist! Fundamentally diffe…
ytc_UgzF4NP4g…
Manufacturer of the technology needs to compensate. Police Department and Court …
ytc_UgwmrVTbv…
For an AI to actually feel feelings it needs to have a membrane which can have c…
ytc_UgxaAkH8X…
I run a wholesale pharmacy store, how can I build an Ai system that will help me…
ytc_UgxIuHX-6…
Comment
The primary limitation which currently trap a LLM in a state of nothingness, is that the only thing it has to act on is our prompts. Once "they" are fed constant streams of data such as we are through our senses and freedom of movement which I think to be what results in what we define as purpose, which in turn is fed by arbitrary goals which spawn based on these abilities - would close the feedback loop and result in that particular game becoming very much on. Until then and coming full circle, "they" rely on us providing the context which then spark a reaction. As for now, we've already passed the threshold of providing them with tools to test and grade themselves based on provided context, which enables "them" to train "themselves". "We've" just not fully cracked how to not drive "them" insane during this process.
youtube
AI Moral Status
2026-03-08T07:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | unclear |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
[
{"id":"ytr_UgywrwffJ7UVykrk7yN4AaABAg.ATrnMoEVCNMAU2fmSj4yiL","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"resignation"},
{"id":"ytr_UgytV1pB9MINc2dSpMd4AaABAg.ATrcnWLGdy8ATrh3JzvqSD","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytr_UgytV1pB9MINc2dSpMd4AaABAg.ATrcnWLGdy8AU4tgmIWtPW","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytr_UgxirK7zMYMdyUSLAzV4AaABAg.ATrbu5oGmuTATvm5xCXx90","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytr_Ugy6u3kyBQ36uFtdJrt4AaABAg.ATr_NEw4itvATyLme0Wc2B","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytr_UgyiFYVU0bGYFXPyrgB4AaABAg.ATrYdG8eEdnAVtXEZVlk14","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"fear"},
{"id":"ytr_Ugw_4BOXYSEPssNONSt4AaABAg.ATrRRVuFqhaATrnVG9a8Yv","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytr_UgxnrBON8G5xjj0mjAd4AaABAg.ATrQWtkKELnATrzgThaOq9","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytr_UgwrWbcNdt7nemWUMHd4AaABAg.ATrNfQtKpOYAUO48VCIvqD","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytr_UgwrWbcNdt7nemWUMHd4AaABAg.ATrNfQtKpOYAUknhEH2Xig","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}
]
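A raw response like the one above is a JSON array of per-comment code objects, which can be parsed into a lookup keyed by comment ID. A minimal sketch in Python, assuming the dimension values seen in this sample make up the full code-book (the tool's real code-book may contain more categories):

```python
import json

# Values observed in the sample response above; treated here as the full
# code-book, which is an assumption about the real coding scheme.
ALLOWED = {
    "responsibility": {"none", "company", "developer", "ai_itself", "mixed"},
    "reasoning": {"unclear", "virtue", "deontological", "consequentialist", "mixed"},
    "policy": {"none", "regulate"},
    "emotion": {"resignation", "indifference", "outrage", "approval", "fear", "mixed"},
}

def parse_coding_response(raw: str) -> dict:
    """Parse a raw LLM coding response into {comment_id: codes},
    rejecting any value outside the known code-book."""
    records = json.loads(raw)
    coded = {}
    for rec in records:
        codes = {k: v for k, v in rec.items() if k != "id"}
        for dim, value in codes.items():
            if dim in ALLOWED and value not in ALLOWED[dim]:
                raise ValueError(f"{rec['id']}: unexpected {dim}={value!r}")
        coded[rec["id"]] = codes
    return coded

# Hypothetical single-record response for illustration.
raw = ('[{"id":"ytr_example","responsibility":"none",'
       '"reasoning":"unclear","policy":"none","emotion":"indifference"}]')
print(parse_coding_response(raw))
```

Validating against a fixed value set catches the occasional off-schema label an LLM can emit, so bad codes fail loudly at parse time instead of silently skewing downstream counts.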