Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- "The bad part is a human did this to this dude I'm sure if this robot hit you it'…" (ytc_UgxSfKbrr…)
- "Ohhhh so cute 😂 you are now blaming their education instead of blaming Artificia…" (ytr_UgwR0robs…)
- "Two things: No AI has "read everything a human has ever written". I prompted Goo…" (ytc_Ugxmu--4a…)
- "@thewannabecritic7490 how can you repeatedly say so many thin…" (ytr_UgxxSPRIR…)
- "Doritos locos. It's no wonder ai couldn't tell it's a doritos bag when you see a…" (ytc_UgwWULqHj…)
- "Teslas are $45k. HM is that? Do they have robotaxis? (Google says the cheapest M…" (ytc_Ugxl4JhXN…)
- "Americans need to pay attention! We have a lot of AI taking place here already.…" (ytc_UgzGsfeD5…)
- "This was an awesome show, Moonshot Guys. I was listening to hear your perspectiv…" (ytc_Ugzu62307…)
Comment
The defense of AI being "it does stuff, so you can spend your time doing other things" is so weird.
When you aren't interested in the thing enough to learn it and need to use AI, then this thing isn't something you should do in the first place.
And using AI just so you don't need to learn feels so wrong, too. If people stop learning because AI does the job, then there will come a time when some of us really are so dependent on this software that we can't do anything without it, and that's a bad thing. Like you said, using AI to help us work, like in medical cases, or sometimes asking ChatGPT something, or asking your AI companion thingy (Alexa, Siri) how the weather will be, is fully understandable imo. But not using it and stopping working/learning just because AI does things; AI should improve things, not fully do them on its own.
Source: youtube · Video: Viral AI Reaction · Posted: 2025-05-03T12:2… · ♥ 2
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | deontological |
| Policy | unclear |
| Emotion | unclear |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
[
{"id":"ytc_UgyHXYVCGdz5LMrN-Wd4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"approval"},
{"id":"ytc_Ugzr9c0WYrNaLk_T4Xp4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgyTYrF3Pa74H1Ae0Ch4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"approval"},
{"id":"ytc_Ugxt67XUPydIAuilM7F4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgzQpRG1IeqRoFQrEPh4AaABAg","responsibility":"user","reasoning":"deontological","policy":"unclear","emotion":"disapproval"},
{"id":"ytc_UgzU656FqBIuXqnvYbZ4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"resignation"},
{"id":"ytc_UgxBp4OXWoO20brznXt4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgxrjFpCweDq568PJXB4AaABAg","responsibility":"user","reasoning":"virtue","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugzso9D8-K3OT6s02WN4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgwUnZDYdDyKUQCaxJR4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"}
]
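The raw LLM response above is a JSON array of coding rows, one per comment ID, each with the four dimensions shown in the Coding Result table. A minimal sketch of how such a batch could be parsed and checked against the codebook before use — note that `ALLOWED` below is only inferred from the category values visible on this page, and the real codebook may contain additional values:

```python
import json

# Allowed values per dimension, inferred from labels seen on this page
# (an assumption; the actual codebook may define more categories).
ALLOWED = {
    "responsibility": {"company", "developer", "user", "ai_itself", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed", "unclear"},
    "policy": {"liability", "regulate", "ban", "unclear"},
    "emotion": {"approval", "disapproval", "outrage", "resignation",
                "indifference", "unclear"},
}

def validate_batch(raw: str) -> list[dict]:
    """Parse a raw LLM response and keep only rows whose values
    are all within the codebook and that carry a string comment ID."""
    rows = json.loads(raw)
    valid = []
    for row in rows:
        in_codebook = all(row.get(dim) in vals for dim, vals in ALLOWED.items())
        if in_codebook and isinstance(row.get("id"), str):
            valid.append(row)
    return valid

# Usage with a one-row response (hypothetical comment ID):
raw = ('[{"id":"ytc_example","responsibility":"user",'
       '"reasoning":"deontological","policy":"unclear","emotion":"unclear"}]')
print(len(validate_batch(raw)))  # 1
```

Rows that fail validation can then be flagged for re-prompting rather than silently coded as `unclear`.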