Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- `ytc_Ugw8LRrsO…`: There is no way that a robot could deal with senile person having a CT scan ene…
- `ytc_UgzilZ-zG…`: Ai. who provides the info or knowledge to the Ai? Who or which group forces t…
- `ytr_Ugy99Mv3d…`: @ColdLass478 Yes, I believe that's called a "self-fulfilling prophecy". I never …
- `ytc_UgxFYvTqo…`: Why would anyone pass up on using AI at work? It works 1200 hours a week for a t…
- `ytc_UgzdVIm40…`: Why asking Bill Gates? He is not involved with AI, never contributed to it, neve…
- `ytr_UgwXgNPVk…`: Just be careful. The code generated with ai is pretty bad, and other output make…
- `ytc_Ugxd6xAF5…`: AI is a tool...its abilities will be used and exploited by those with an adept m…
- `ytc_UgzjN2roO…`: I dont think AI in itself is dangerous, its just that people (specially those in…
Comment
What should concern us is the asymmetry of progress. Human experts lose ground every day, as the information in their domains expands beyond the limits of individual cognition. What you mastered yesterday is already obsolete today, which means that relative to the state of the art, you are effectively becoming less competent with time. In contrast, artificial intelligence does not suffer from such decay. It improves continuously, limited only by computational resources and storage capacity. The trajectory is clear: as human expertise erodes, AI capabilities accelerate. This is not the future—it is the present. And the most consequential phase of this transition has not even begun.
youtube · AI Governance · 2025-09-07T08:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | resignation |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_UgwNxtwYDsL0n9j0iLp4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyEGwTpPkNp3Q67pxN4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgzBzIKWMqiVhN9LVC54AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"approval"},
{"id":"ytc_UgxSvKHhxM5fxRe13Pd4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgzKM2ZKhkFXjCryvRl4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"resignation"},
{"id":"ytc_UgyB9Jwp7MDKroruUVh4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugx5jq5AoJOESPFVuMp4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
{"id":"ytc_Ugx2CQ956EyyWzaoVX54AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_UgxjWr4poaVBonjtn9F4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgzwbLeSmcmf7oRjc1V4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"}
]
```
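A response shaped like the one above can be turned into a per-comment lookup table keyed by comment ID. The sketch below is illustrative only: the allowed-value sets are inferred from the values visible in this sample and the coding-result table, not from a documented schema, and `parse_codes` is a hypothetical helper name.

```python
import json

# Dimension vocabularies inferred from this sample (an assumption,
# not a documented schema for the coding pipeline).
DIMENSIONS = {
    "responsibility": {"none", "user", "company", "developer", "government", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"none", "ban", "regulate", "liability", "unclear"},
    "emotion": {"fear", "outrage", "approval", "resignation", "indifference"},
}

def parse_codes(raw: str) -> dict:
    """Parse a raw LLM response (a JSON array of coded records) into a
    dict keyed by comment ID, skipping records with unknown values."""
    coded = {}
    for record in json.loads(raw):
        codes = {k: v for k, v in record.items() if k != "id"}
        if all(v in DIMENSIONS.get(k, set()) for k, v in codes.items()):
            coded[record["id"]] = codes
    return coded

raw = ('[{"id":"ytc_example","responsibility":"none",'
       '"reasoning":"consequentialist","policy":"unclear",'
       '"emotion":"resignation"}]')
codes = parse_codes(raw)
print(codes["ytc_example"]["emotion"])  # resignation
```

Validating against the vocabularies before storing a record is a cheap guard against the model emitting a label outside the codebook, which would otherwise silently pollute downstream counts.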