Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "i will only accept ai 'art' as art when robots WITHOUT direct human instruction …" (ytc_UgzDMJ5jn…)
- "AI 'art' is not art, it is a tool. It has it's uses, but it lacks humanity and d…" (ytc_UgxIwIhBj…)
- "We need to share with AI the lessons humanity have learned including integrity, …" (ytc_UgxgBhKzc…)
- "@neociber24 I wouldn’t refer to speculation as conspiracy theories. Right now …" (ytr_UgwIUOmcp…)
- "NNs that are trained on human drivers, by definition, mimic human driver behavio…" (ytc_Ugx35WVwW…)
- "Hey NeedzTz! Thanks for watching! If you found Sophia's insights intriguing, you…" (ytr_UgxjtPNiV…)
- "If you know how to make money, AI can also navigate to do the same task If you k…" (ytr_UgyZJiKqZ…)
- "I can see AI taking on the appearance of the hive mind in \"Raised by Wolves\". I…" (ytc_UgzRJ8Cpr…)
Comment
First of all there is no such thing as true AI or AGI as of today all of these supposed Robots doing peoples job it doesn't exist. Now people seem to think ChatGPT is some sort of sentient being it's really just LLM model aka Autocomplete on steroids. I use these LLM's on a daily basis I'm a software engineer and even the best models hallucinate after just couple questions there is no thinking going on it's just querying from corpus of huge data , it's just a tool and a tool is only as good as the person who is using it.
You cannot achieve AGI by scaling LLM's if you scale a piece of shit it just becomes bigger piece of shit I'd say we are still loooong way from AGI
youtube · Viral AI Reaction · 2025-11-23T17:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | resignation |
| Coded at | 2026-04-26T23:09:12.988011 |
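The coding table above assigns each dimension one value from a fixed codebook. As a minimal sketch, a result like this can be checked against the set of allowed values before it is stored. The value sets below are only those observed on this page (in the table and the raw responses), not the full codebook, and `validate` is a hypothetical helper, not part of the pipeline shown here.

```python
# Allowed values per coding dimension. NOTE: these sets contain only the
# values visible on this page; the real codebook is assumed to be larger.
SCHEMA = {
    "responsibility": {"none", "ai_itself", "user", "company"},
    "reasoning": {"consequentialist", "virtue", "contractualist", "mixed"},
    "policy": {"none", "liability"},
    "emotion": {"approval", "fear", "outrage", "indifference", "resignation"},
}

def validate(code: dict) -> list:
    """Return the dimensions whose value falls outside the known schema."""
    return [dim for dim, allowed in SCHEMA.items()
            if code.get(dim) not in allowed]

# The coding result from the table above passes cleanly.
coding_result = {"responsibility": "none", "reasoning": "consequentialist",
                 "policy": "none", "emotion": "resignation"}
print(validate(coding_result))  # []
```

A check like this catches the common failure mode of LLM coders inventing labels outside the codebook, so malformed records can be flagged for re-coding rather than silently stored.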
Raw LLM Response
```json
[
{"id":"ytc_Ugz7aivzDZqGeeYkSw54AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgykrEGQ491RlvEX1j54AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgzrIYVMgxAIka0UUid4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_UgzIfwUM6heF__zFda14AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxPHC1SNTab8Lo7Tst4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_Ugwo4GTCRuWAffQ4XyN4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_Ugw7WEQ8Cij7IDKijYB4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgxaE365fKMIPP_lxF14AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"fear"},
{"id":"ytc_UgyqcdSshqmIUXV4bWl4AaABAg","responsibility":"company","reasoning":"contractualist","policy":"liability","emotion":"fear"},
{"id":"ytc_UgySQi3KlgJEfw1hEJh4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"}
]
```
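The raw response above is a JSON array with one object per coded comment. As a minimal sketch, such a response can be parsed and indexed by comment ID to support the look-up described at the top of this page; `parse_codes` is a hypothetical helper, and only the field names come from the response shown above.

```python
import json

# A single record in the shape of the raw LLM response shown above.
raw_response = '''[
  {"id": "ytc_Ugz7aivzDZqGeeYkSw54AaABAg", "responsibility": "none",
   "reasoning": "consequentialist", "policy": "none", "emotion": "approval"}
]'''

def parse_codes(raw: str) -> dict:
    """Parse a raw LLM response and index the coded comments by ID."""
    return {rec["id"]: rec for rec in json.loads(raw)}

codes = parse_codes(raw_response)
print(codes["ytc_Ugz7aivzDZqGeeYkSw54AaABAg"]["emotion"])  # approval
```

Indexing by ID makes the "look up by comment ID" view a single dictionary access; in practice the parse step would also need to handle responses where the model wraps the array in prose or emits invalid JSON.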