Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples

- "This is why we need to stop calling this stuff AI. Its NOT INTELLIGENT. These ar…" (ytc_UgykNed6K…)
- "I feel the same iam not an artist but this ai destroying every artist that's is …" (ytc_Ugy788oDW…)
- ""I bought a robot to run a marathon for me, so I'm basically an Olympian." I do…" (ytc_UgzN-Usdh…)
- "USA - the land of the lawsuit and clickbait. Even a "reasonable person" is a bit…" (ytc_Ugx_dtugA…)
- "AI engineers are now the enemy of the world. Why do not go medieval on them now …" (ytc_UgzEM8ttD…)
- "human time is more valueable than shite 9-5s. Passive income via ai agents is th…" (ytc_Ugz76bkv5…)
- "Just found this channel and this video’s hilarious, definitely subscribing! Also…" (ytc_Ugz3rHOVd…)
- "I still don’t see where ChatGPT proved you “wrong”. Also, we call the meat a hot…" (ytc_Ugz8pt_ZR…)
Comment
we got to stop thinking about super AI being a "tool" that helps create something more efficiently so we will have job loss to contend with like the industrial revolution. Its more like creating a new "creator" who can just replace our contribution entirely. Stopping now to reevaluate should be our goal, as many experts have said our current models can do things that we have not even began to economically roll out. Instead of pushing towards a uncertain unknown, unprepared like a moth to a flame. Use General AI in a Narrow AI perspective to redefine our understanding and growth in every field while maintaining our control should be our easiest choice ever but it seems not the likely scenario.

I've never been someone to worry about the global position on things but this topic makes me uncomfortable. Weirdly at first it was for my children but now at the rate it's going it would affect even me in my lifetime.

We need experts to stop saying something like "keeping us in the dark", "his special sauce", etc and start detailing out a exact plan on how they manage AI safety or if they cannot then be TRANSPEARENT. No one should be making choices that will affect us all and just saying "its ok I'm on it but cant talk about it".

Experts already say AI is more like a black box that we create and study, we don't know what it is really and given certain inputs could be given something completely different even using the same equation. They have stated it could take decades to understand the capability of the model we have already produced but we carelessly push forward instead of slowing.

It's uncertain times to be sure... Look at some of the predictions : 3 years General AI, 3-4 years 60% intellectual jobs replaced, 5-6 years AI robots able to replace physical workforce. At that point is there even an idea on the job loss %? 70?80?90? We got to be careful.
We learned nothing from Oppenheimer
Source: youtube · Cross-Cultural · 2025-10-28T02:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
[
{"id":"ytc_UgxDiuOzN9coKAveE3Z4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"outrage"},
{"id":"ytc_UgxaeRLfxnBSTbs9qIt4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugw6iaBKQNP0sVd9nmd4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgzNOPxbToZEQcxgy_V4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgycKR0dYWCcwqk0x5F4AaABAg","responsibility":"none","reasoning":"deontological","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugy4hEZGLgydyeR3tPl4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"liability","emotion":"outrage"},
{"id":"ytc_Ugxsn6DRq7M4Ee64TSF4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgzYMzxb1QC3OH-debd4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgxsPYH1A8oDwjgsHyl4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxHILG_Z3A6guYpcRR4AaABAg","responsibility":"distributed","reasoning":"deontological","policy":"regulate","emotion":"fear"}
]