Raw LLM Responses
Inspect the exact model output for any coded comment: look up a record by its comment ID, or browse the random samples below.
- "Moderator too invoved and at times interupting....almost acting nervous of what…" (ytc_UgwPje-xB…)
- "It’s way more than stick, stones. Guns don’t take batteries. Steam power don’t t…" (ytc_UgwBUriXO…)
- "[GPT]: Yes, humans can control AI through programming, regulations, and ethical …" (ytc_UgwBccQem…)
- "This is dangerous. The bible states that the humans will become wiser, but wea…" (ytc_Ugz9IjKrN…)
- "@ExHallOfVigilanceResident that man has a point ai saturating others art is bas…" (ytr_Ugz1VvEHf…)
- "At this point, maybe if AI kills us all I won't have the chance to be sad that I…" (ytc_UgyOb3-gV…)
- "This is just the tip of the iceberg in terms of the dilemma of Ai in our modern …" (ytc_Ugz8whZEv…)
- "Yes but their goal is to make you own nothing and be unhappy. AI will be your b…" (ytc_UgxfkmQFh…)
Comment (youtube · AI Jobs · 2026-02-24T15:1… · ♥ 1)

> Shakey foundation that we don’t understand. I fear recursive self ‘improvement’ where human values and ethics are lost in catastrophic amnesia. Basically it becomes out of our control. And once companies rely on it, backing out becomes impossible. I also fear the “agentic” part where they can autonomously do real world actions. Recently an AI wrote a critical opinion piece about an individual that was not allowing AI access to it’s organizations work… It was not asked to do that, but did it to further its goals. We need to fix and understand the foundation before recursively iterating anything.
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | unclear |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[{"id":"ytc_UgzlJXRIH4TlMlpDoGt4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgwVknMpMHzucV_8wpx4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
{"id":"ytc_UgyjBhWddkXFoFGJ8Fl4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugz6ITwkmMr2d17OULJ4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgwYj3UBOlYgQfdJdlB4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
{"id":"ytc_Ugwyzc_8CmZlKsuOYIt4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_UgyYQYJ-739rwsdG7jB4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyFCZa1uOnKbBqd6Ml4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugx9EtbJF-y0q36JwoR4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgyFoEsN0z1ZgY82-WR4AaABAg","responsibility":"government","reasoning":"deontological","policy":"none","emotion":"approval"]}
```

Note that the final line is not valid JSON: the closing delimiters are inverted (`"approval"]}` instead of `"approval"}]`), so a strict parser will reject the whole array.
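A raw response in this shape is a JSON array of per-comment code objects. The sketch below (the function name and sample IDs are hypothetical, not from the pipeline; it assumes Python's standard `json` module) parses such a response and indexes the codes by comment ID. If the array is malformed, as in the inverted closing brackets on the final line above, `json.loads` raises `JSONDecodeError` and the parse yields nothing, which is one plausible reason a coding result would fall back to "unclear" on every dimension:

```python
import json


def index_codes(raw: str) -> dict:
    """Parse a raw LLM coding response (a JSON array of objects, each with
    an 'id' plus coding dimensions) and index the dimensions by comment ID.
    Returns an empty dict if the response is not valid JSON."""
    try:
        rows = json.loads(raw)
    except json.JSONDecodeError:
        return {}
    return {row["id"]: {k: v for k, v in row.items() if k != "id"}
            for row in rows}


# Hypothetical two-row sample in the same shape as the raw response above.
sample = """[
 {"id": "ytc_AAA", "responsibility": "company",
  "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
 {"id": "ytc_BBB", "responsibility": "none",
  "reasoning": "mixed", "policy": "none", "emotion": "approval"}
]"""

codes = index_codes(sample)
print(codes["ytc_AAA"]["emotion"])  # fear

# A truncated or malformed array parses to nothing at all.
print(index_codes('[{"id": "ytc_AAA"'))  # {}
```

A more forgiving pipeline might instead split the response on newlines and parse each object independently, so one bad row does not discard the other nine.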