Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples:

- "I just saw across my screen they want to lock my phone why now I will cry 😭😭😭😭…" (ytc_UgyMGyck0…)
- "As much as I think that AI could potentially provide some fantastic tools for ar…" (ytc_UgxCdUdPp…)
- "It's all bullshit scare for the civilians I just had a decent convo with gemini …" (ytc_Ugy5kMAHA…)
- "Why do you think Elon is making a chip that can go in your head to make you into…" (ytc_UgzFqxqkC…)
- "I don't quite get it, but Gen Z/Alpha seem to REALLY HATE AI... Particularly on …" (rdc_n0kshvf)
- "This guy doesn't get it. When wooden rubber tires were invented, horse drawn car…" (ytc_Ugyyz0QsL…)
- "Max kept asking to quantify the probability of AI becoming an existential threat…" (ytc_UgxGaW9p1…)
- "AI is here to stay and while it may be regulated in your country... I can name 1…" (ytc_UgyVNLLND…)
Comment
I disagree; I think current methods will give rise to AGI. Give this a read:
How do we think? It's either visual, sound, or language. If I ask you to think right now, you would either come up with some words, sounds, or images in your mind, right?
The current AI models are also using the same three inputs for training right now. Yes, all they do is predict the next word/token, which makes us think that they do not really "understand" language. But isn't that what we do too? Don't we just predict the next word, generating sentences in our mind to "think"?
If our being intelligent, conscious, and thoughtful comes from being able to understand language, and understanding language comes from being able to generate sentences, then I think neural networks will indeed lead us to AGI eventually (probably in two years or so).
It's just that the current method of developing AI models will likely need far more data, compute, energy, and time to reach a human level of understanding and generalizability than a human brain does. So even if an AGI were as smart as or slightly smarter than a human brain, it would not be nearly as efficient.
But once a single AGI is achieved, it will likely quickly create multiple copies of itself that work together to find a much better way to develop AGI, one that requires far less of all the aforementioned resources.
From there, the advent of ASI will be pretty quick, obviously.
Source: youtube
Posted: 2026-02-14T17:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | unclear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[{"id":"ytc_Ugz1AId5aTrB0vU068x4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},{"id":"ytc_UgwilofhGKhwDu0L7Ft4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"disapproval"},{"id":"ytc_UgyFYXqIPZgSqF1BRIt4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},{"id":"ytc_Ugx-tXmr1Qbf99MFrdF4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},{"id":"ytc_Ugy6ZM9msFzNFdQRhp94AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},{"id":"ytc_UgzGNsSq1iKLhPOIyGV4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},{"id":"ytc_UgxWCzblxI-AOE55SLd4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"resignation"},{"id":"ytc_UgxdU-TuCxJaaCrlPWl4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},{"id":"ytc_UgzRtXhqWxt8hXpcZyp4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},{"id":"ytc_UgzGjyzcKfKocvAvCr54AaABAg","responsibility":"none","reasoning":"virtue","policy":"unclear","emotion":"approval"]}
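Note that the coding result table above shows every dimension as "unclear" even though the raw response contains concrete labels. One plausible explanation is that the raw response is not valid JSON: it ends in `"approval"]}` where a well-formed array would end in `"approval"}]`. The sketch below shows how a pipeline might parse such output defensively; the fallback-to-"unclear" record and the bracket-repair heuristic are assumptions for illustration, not the actual pipeline code.

```python
import json

# Hypothetical fallback record; assumes the pipeline defaults every
# dimension to "unclear" when the model output cannot be parsed.
UNCLEAR = {
    "responsibility": "unclear",
    "reasoning": "unclear",
    "policy": "unclear",
    "emotion": "unclear",
}

def parse_coding(raw: str) -> list[dict]:
    """Strictly parse the model's JSON array of coded comments."""
    return json.loads(raw)

def parse_with_repair(raw: str) -> list[dict]:
    """Try strict parsing first; on failure, attempt one common repair:
    a swapped closing `]}` at the very end of the array (as in the raw
    response above). If that also fails, return an empty list so the
    caller can fall back to UNCLEAR for every comment."""
    try:
        return parse_coding(raw)
    except json.JSONDecodeError:
        stripped = raw.rstrip()
        if stripped.endswith("]}"):
            try:
                # Swap the trailing `]}` for `}]` and retry.
                return parse_coding(stripped[:-2] + "}]")
            except json.JSONDecodeError:
                pass
        return []
```

Against the raw response above, `parse_coding` raises `json.JSONDecodeError`, while `parse_with_repair` should recover the ten coded records by swapping the final `]}` to `}]`.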