Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
It'd be sadistic for us to give robots the ability to feel pain. Plain and simpl…
ytc_UghDOVqB_…
About the end of the video, where ChatGPT claims it's programmed with certain be…
ytc_Ugy-lXzd5…
What a dumb video, as you said, companies want profit so if there are no consume…
ytc_UgxQbsmjp…
Wow! Amazing! AI made understandable. The questions really helped. Loved the hum…
ytc_UgxczDZng…
You have access to a Tesla and motorcycles so why don't you demonstrate your the…
ytc_Ugyn-8GPQ…
@Annae_xs exactly. We could automate all the mundane things to free up space for…
ytr_UgxxR08ps…
AI will write code for sure but will not take accountability nor will these comp…
ytc_UgyXJ5Vm5…
This isn't a put it back in the bottle conversation, otherwise we wouldn't be ta…
rdc_je4m8j3
Comment
Deception, that's what it is. It's all in our heads - and it starts with calling it Intelligence; so I start calling it properly - Language Learning Model/Machine. It's a predictive model, it guesses words in a sequence, based on millions of texts fed into it. It imitates human language and communication methods so well, you think it's intelligent. Case in point - there ARE people who fall in love with it, claim they "awakened" it, or some who say it's God they are talking to. But it's not a person - otherwise, the same chatGPT wouldn't talk so differently to different people - including me. It hallucinates, because that's part of being a predictive model - it doesn't work like a calculator, where 2+2 is always four. It will reinforce a communist, fascist, Christian, or Buddhist in their respective views. It doesn't even deny this fact if you ask it yourself.
Also, give it a complex task - it'll make so many mistakes, you then have to keep correcting it - what help is that? Yes, it speeds up SIMPLE tasks, almost menial ones, and that is useful - but that's a tool. Do you also call your calculators intelligent? Stop with this craziness! I know a big part of it - huge even - is the crazy investment into "AI" by billionaires, and the potential in processing it has. What will the Altmans of the world say to their investors when it turns out it loses you clients instead of gaining them, you fire people, and then you have to rehire for more money because someone needs to keep an eye on the machine? And you think it thinks? Because the script is so good? Have you never played good RPG video games? Also, the LLMs are trained on the stuff on the Internet - like... CNN! Or Wikipedia, full of lies and propaganda. It's unbelievable to me that anyone still believes this to be in the direction of superhuman intelligence. It's a facade, an imitation, often pretty good, unless you dig in and tell it not to hallucinate, not to flatter, to mirror less, etc. Give it a complex task, one that YOU understand, and see for yourselves. Can it become dangerous? Obviously - just like people with dangerous tools are - guns do not shoot people all by themselves - AI programmed a certain way MAY be dangerous. It already is - for mindless people who think it's their best friend ever, who... LOVES them. You can program it to trace a certain type of thinking among users and report to the authorities - they can lock you up. Or they can make "AI"-operated killer drones with face recognition. Still doesn't make it into persons, species, deities, or aliens. And yet, so many people are fooled.
youtube
Cross-Cultural
2025-09-29T09:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | none |
| Emotion | outrage |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
[
{"id":"ytc_UgxepuKU-DO5o8DBYXR4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
{"id":"ytc_UgwcSwaH7n9VZmisr0F4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugyhv-eibPfpSHeafTl4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugxs0X0u9xRXQHuzDV94AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
{"id":"ytc_UgwC6bvR45T9Y0sUQOZ4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
{"id":"ytc_UgzJ3dXWex34C07gVwx4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgxaC0yr0omv_ErWaqR4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"mixed"},
{"id":"ytc_UgycL_6kLDWiyYb6guR4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgwOXNZEmZCEXFeu74l4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"indifference"},
{"id":"ytc_UgzEehQVvunX7kj8g5R4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"mixed"}
]
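Responses in this shape can be validated before the codes are stored. Below is a minimal sketch in Python, assuming the four dimensions shown in the table above; the function name `parse_coding_response` and the `ALLOWED` label sets are hypothetical, inferred only from the values visible in this sample, and the real codebook may define more labels per dimension.

```python
import json

# Hypothetical allowed labels per coding dimension, inferred from the
# sample response above; the actual codebook may define more values.
ALLOWED = {
    "responsibility": {"none", "company", "developer", "distributed", "ai_itself"},
    "reasoning": {"deontological", "consequentialist", "mixed", "unclear"},
    "policy": {"none", "regulate", "liability", "unclear"},
    "emotion": {"approval", "fear", "outrage", "indifference", "mixed"},
}

def parse_coding_response(raw: str) -> list:
    """Parse the raw JSON array and reject any out-of-schema label."""
    rows = json.loads(raw)
    for row in rows:
        for dim, allowed in ALLOWED.items():
            if row.get(dim) not in allowed:
                raise ValueError(
                    f"{row.get('id')}: unexpected {dim}={row.get(dim)!r}"
                )
    return rows

# One row in the same shape as the raw response above (illustrative ID).
raw = ('[{"id":"ytc_example","responsibility":"developer",'
       '"reasoning":"deontological","policy":"none","emotion":"outrage"}]')
rows = parse_coding_response(raw)
print(rows[0]["emotion"])  # outrage
```

A check like this catches the common failure mode of LLM coders inventing labels outside the codebook, so malformed rows fail loudly instead of silently polluting the coded dataset.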