Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples

- This may work in the western world however Africa will likely stay the same. I l… (ytc_UgydmNxJF…)
- Why even promote this idea with this extremely wasteful ai slop video ..? What a… (ytc_UgwhjBx0O…)
- AI isn’t a threat! It’s a tool! “There’s nothing wrong with having a tree (or AI… (ytr_UgzjTBX8Q…)
- The issue is not programming sentient robots. The issue is programming robots th… (ytr_UggvblKpw…)
- Honestly, I would trust AI better than any human, to analyze a target and decide… (ytc_UgzEO6XHA…)
- ChatGPT isn't smart enough to have the answer to so many of these questions 😂😂😂 … (ytc_UgxSu6qC4…)
- It's not "Ai Art", it's Ai generated images. Ai will never be able to compare to… (ytc_UgzhBShcq…)
- i cannot wait for the ai term to stop. its the meta of derpy skibidi… (ytc_Ugzmcz7tK…)
Comment
Two things that are always overlooked by doom-sirens like this are:
- Hallucinations,
- AI lacks forethought.
HALLUCINATIONS:
A 'junior-level' ai WILL make obvious, common sense mistakes.
A 'manager-level' ai WILL miss picking up on these mistakes.
An 'executive-level' ai WILL tell these mistake-makers to duplicate their work.
In that hierarchy of operations there are literally millions, tens of millions, possibly hundreds of millions of tiny factors and judgements and decisions made across a multinational company every single day. Let's say 0.001% of them are hallucinations. That's still 100,000 hallucinations per day. BUT, since it's AI and it does it a bazillion times faster than humans:
- It's actually 100,000 hallucinations every hundredth-of-a-second.
The companies who rely on ai WILL crumble; it's inevitable.
Right now, the same things DO happen but they happen over months and sometimes years, meaning there's time to adjust and adapt and fix and what-not. When it's AI doing it, though; it happens over hours, minutes and seconds.
FORETHOUGHT: (lack thereof)
AI cannot and will not accurately, let alone reliably, predict future outcomes; it is 100% reactive to everything in the world and not proactive.
As the company spends 30 seconds driving itself into the ground; it doesn't realise what it's doing until AFTER it's done it and can look back and REACT to what it did.
AI speeds ahead blindly because it has no eyes to 'see' ahead of itself. It reacts to what HAS happened.
It will never come up with an idea by itself and then make the decision to implement it because it thinks it's a good idea - It has no concept of 'future'.
CONCLUSION:
Every company who actually tries to follow the path of these hypothetic companies in this thought-experiment-video will ALL fail.
(Ironically, humans would learn from those examples and choose to NOT follow that path - In that way, it's self-correcting)
- But, yes; it's still going to be a job-pocolypse in the short-term because people are slow learners and AI is still an incredible tool that humans can use and companies absolutely WILL use to replace like 60% of us.
Source: youtube · Viral AI Reaction · 2025-11-24T02:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-26T23:09:12.988011 |
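A coding result like the one above can be sanity-checked before it is stored. Below is a minimal validation sketch; the allowed value sets are inferred only from the samples visible in this dashboard (the project's actual codebook may include more values), and the `validate` helper is hypothetical.

```python
# Sketch: validate one coded record against the dimension vocabularies
# seen in this dashboard. NOTE: these allowed-value sets are inferred
# from the displayed samples; the real codebook may be larger.
ALLOWED = {
    "responsibility": {"company", "government", "ai_itself", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue",
                  "contractualist", "unclear"},
    "policy": {"none", "regulate", "unclear"},
    "emotion": {"outrage", "fear", "indifference", "approval", "mixed",
                "unclear"},
}

def validate(record: dict) -> list[str]:
    """Return a list of problems; an empty list means the record looks valid."""
    problems = []
    for dim, allowed in ALLOWED.items():
        value = record.get(dim)
        if value is None:
            problems.append(f"missing dimension: {dim}")
        elif value not in allowed:
            problems.append(f"unexpected {dim} value: {value!r}")
    return problems

# The record shown in the Coding Result table above:
print(validate({"responsibility": "ai_itself",
                "reasoning": "consequentialist",
                "policy": "none",
                "emotion": "indifference"}))  # → []
```

Running this against each record in a raw batch response catches both missing dimensions and out-of-vocabulary values that a model occasionally invents.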
Raw LLM Response
```json
[
{"id":"ytc_Ugz9EnbaYqsZw-JZc1V4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_UgwtrwB5tv2biVzD3Mh4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgyuRq1SL7dYUcaZL594AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugzw5PW_uiDbnCGs4ph4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyLTKzzCSnILh-47aV4AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugx-McQpqV6zR44gN2B4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugx1DqBlXTLqB4L79Tp4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_Ugwwuol9az0DQEZOaEp4AaABAg","responsibility":"government","reasoning":"contractualist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgzhEYm8iPJHTbssIoB4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgxEy7qatnUpQFnPvOZ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"}
]
```
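A raw batch response in this shape can be parsed and indexed by comment ID, which is what a "look up by comment ID" view needs. A minimal sketch, assuming the model emitted a well-formed JSON array (a production pipeline would handle malformed output more defensively); the `index_by_id` helper is hypothetical, and the two records here are copied from the response above.

```python
import json

# Two records from the raw response above, used as sample input.
raw_response = '''[
{"id":"ytc_UgyLTKzzCSnILh-47aV4AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgxEy7qatnUpQFnPvOZ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"}
]'''

def index_by_id(raw: str) -> dict[str, dict]:
    """Parse a raw batch response and index its records by comment ID."""
    records = json.loads(raw)  # raises ValueError if the model emitted bad JSON
    return {record["id"]: record for record in records}

coded = index_by_id(raw_response)
print(coded["ytc_UgxEy7qatnUpQFnPvOZ4AaABAg"]["emotion"])  # → indifference
```

With this index, resolving a looked-up ID to its coding result is a single dictionary access rather than a scan over the whole batch.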