Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
"A good artist uses a tool (ai) to create something cool and at the same time so…
ytc_Ugwg7DONa…
What is interesting to me is he compared the AI to a tiger and not a dog which c…
ytc_UgyjJ74t_…
Is there a scope of AI in VR? Say yes or no
Yes
ChatGPT ♥️♥️…
ytc_UgzKYL2mR…
This isnt really true, deviant art does not train their ai on their artists work…
ytc_UgzbhKMWq…
I'm a heavy AI user and it's nothing more than a probability-based word processo…
ytc_UgyixknFt…
I'm noticing something similar, but it's likely because we're both operating in …
rdc_oae856v
AI doesn't need to become fully sentient or have the ability to bring about our …
ytc_UgzbbRm08…
So funny the same state that says everything cause cancer is the same state to a…
ytc_UgyYkbAfv…
Comment
There is no evidence that AI can work without human supervision. You might say it will eventually get there, but so far that hasn't happened. AI can certainly do more things than it used to, but in every single thing it does it makes mistakes. That means humans are required at every level. That includes training of AI. It has already been demonstrated that if AI is used to train AI, the quality of the result gets progressively worse. You need new non-AI content to train AI. And even with all the supervision, the content will eventually run out, and AI as we know it now will hit a brick wall.
youtube
Viral AI Reaction
2025-11-24T15:1…
♥ 2
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_UgxH73MNIB2ymK0tLhB4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugz8Z7GS2z0yzhz1crp4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
{"id":"ytc_Ugx-QtuMn5SYrnZqv154AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgyVh7OHncSZo0BY0Zl4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgyJK-fe5yjpW_0cIxN4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"outrage"},
{"id":"ytc_UgzLETmjHh4sAkio3Kt4AaABAg","responsibility":"government","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgwxHdc_m-tSlXOrTJl4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_UgyHMRwgEuQIl4zVE-N4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgzsCosPTuzNNiIxz6h4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"none","emotion":"fear"},
{"id":"ytc_UgxMYyd1y7gtPjC-1XF4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"}
]
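The raw response above is a JSON array with one coding object per comment ID, and the page supports lookup by that ID. A minimal sketch of such a lookup, assuming the field names shown in the response (the helper name `index_by_id` and the two-entry excerpt are illustrative, not part of the tool):

```python
import json

# Excerpt of the raw LLM response shown above: a JSON array of codings,
# one object per comment ID.
raw_response = """
[
 {"id":"ytc_UgxH73MNIB2ymK0tLhB4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
 {"id":"ytc_Ugx-QtuMn5SYrnZqv154AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"fear"}
]
"""

def index_by_id(response_text: str) -> dict:
    """Parse the model output and build a comment-ID -> coding map."""
    codings = json.loads(response_text)
    return {entry["id"]: entry for entry in codings}

lookup = index_by_id(raw_response)
coding = lookup["ytc_Ugx-QtuMn5SYrnZqv154AaABAg"]
print(coding["responsibility"], coding["emotion"])  # developer fear
```

Building the dict once makes each subsequent lookup O(1), which matters when inspecting many coded comments against a large response.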