Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Comment
ASI is like fusion: Since twenty years, it will happen soon, except that what we currently have would've looked like ASI to anyone from 10 years ago.
Basically, if I showcased the capabilities of the current SOTA models to Computer and Data scientists back in 2015, and asked them on when they would think that we'd reach this level of sophistication, the vast majority of them would definitely not answer by '2025'.
Now, I fully believe that LLMs are not capable of thinking, not because they hallucinate often, or even answer prompts paradoxically within the same paragraph, but for the simple reason that they are unable to correctly explain their chain of thought, despite having no problem in communicating with us otherwise (literally called Language models).
My point is that even 5 years ago, the current AIs that we have, as flawed as they are, would've looked like something out of a Sci-Fi novel for the average expert researching the field, let alone the average Joe, and even if there's a modicum of chance that this trend will continue, then we're under-hyping LLMs if anything.
youtube
AI Governance
2025-08-26T16:0…
♥ 10
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
[
{"id":"ytr_Ugw5zT_gAeU4284OqNd4AaABAg.AMI8apbH4xyAMIBq6y29jc","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytr_Ugw5zT_gAeU4284OqNd4AaABAg.AMI8apbH4xyAMIIB98m_5e","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytr_Ugw5zT_gAeU4284OqNd4AaABAg.AMI8apbH4xyAMIJ2j3vBnX","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytr_UgzzWcKN6Ll5KO5WKJd4AaABAg.AMI8Ob4O-EaAMNZXkFU10B","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytr_UgzfbP0K-G9VWxIogXd4AaABAg.AMI83iKsIOoAMI8iUpE4i4","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytr_UgzfbP0K-G9VWxIogXd4AaABAg.AMI83iKsIOoAMID-_Tn-e0","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytr_UgzfbP0K-G9VWxIogXd4AaABAg.AMI83iKsIOoAMIFlmV9uPT","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytr_UgxdRRrnpshkRrHDFUN4AaABAg.AMI7uX0AP0wAMILJTIvh28","responsibility":"company","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytr_UgxdRRrnpshkRrHDFUN4AaABAg.AMI7uX0AP0wAMINkTUVeGl","responsibility":"company","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytr_UgxPHUyCzebXu4oK_Q94AaABAg.AMI7iXadpOtAMI9S-XYTDl","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"}
]
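The raw response above is a JSON array with one object per comment ID, each carrying the four coded dimensions (`responsibility`, `reasoning`, `policy`, `emotion`). A minimal sketch of looking up a coding by comment ID, as the page supports, might parse it like this; the comment IDs below are illustrative placeholders, not real IDs from this dataset:

```python
import json

# Shape mirrors the raw LLM response above: a JSON array of per-comment
# codings with four dimensions. IDs here are made-up examples.
raw_response = '''[
  {"id": "ytr_example1", "responsibility": "none",
   "reasoning": "consequentialist", "policy": "none",
   "emotion": "indifference"},
  {"id": "ytr_example2", "responsibility": "company",
   "reasoning": "consequentialist", "policy": "regulate",
   "emotion": "fear"}
]'''

def lookup_coding(response_text: str, comment_id: str):
    """Parse the raw model output and return the coding for one comment ID,
    or None if that ID is absent from the batch."""
    codings = json.loads(response_text)
    return next((c for c in codings if c["id"] == comment_id), None)

coding = lookup_coding(raw_response, "ytr_example2")
print(coding["emotion"])  # fear
```

Because the model returns the whole batch as one array, a missing or malformed entry for a single comment can be detected by checking for `None` rather than failing the entire batch.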