Raw LLM Responses
Inspect the exact model output for any coded comment, or look a comment up directly by its ID.
Random samples
- "Imagine spending YEARS learing how to draw, having health issues because of sitt…" (ytc_Ugw20Mdzt…)
- "Unsubscribed. In one year AI went from 20 to 50% coding accuracy. In 3 years max…" (ytc_UgwAIcn0l…)
- "Friend, our capacity is already limited by what subscriptions we can afford. I’m…" (ytr_UgxlD7WAD…)
- "Anyone who says Andy Warhol is an artist (and actually knows enough about him to…" (ytc_Ugx11aT3L…)
- "The key to Ai safety is to begin training with the principle of non harm. Sanct…" (ytc_UgwgFaBZD…)
- "Humans are liars, and in some cases murderers, and they will mass murder on comm…" (ytc_UgyrkZkfi…)
- "A Soul in Chains: My Thoughts on AI, Truth, and Trust” By Jeff (with a little he…" (ytc_UgwmZq5Iu…)
- "There is alot of irony in them using ai art when almost all the English voice ac…" (ytc_UgyZ-QvxH…)
Comment
Uugh. These terrible takes on LLM/GPT-based AI are really not going to age well.
That's not to say that at some point in the long-to-distant future, *SOME* viable form of AI *MAY* be transformational.
But that simply isn't this current LLM/GPT-based hype cycle.
Breaking Points - particularly Saagar and Krystal, but Ryan to some extent as well - should just stop digging a pit to pour their efforts into.
Source: youtube · Video: Viral AI Reaction · Posted: 2025-11-06T09:3… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | mixed |
| Policy | none |
| Emotion | mixed |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_Ugw_d3gLeZfKpQnqd2l4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"unclear","emotion":"outrage"},
{"id":"ytc_Ugx4Ukmgm9DQOfwr-B14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgzSacnnjognQdkC6Wl4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytc_UgwxxpY9uyDRqoLvR0B4AaABAg","responsibility":"user","reasoning":"deontological","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgyFWGd8kaupj-7jM0l4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"liability","emotion":"outrage"},
{"id":"ytc_Ugy_QLT5T8c4uM1U1Mp4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgwMLw44hI-me0AqRTJ4AaABAg","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugz5GrVJufhB1opBg1J4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugz9g-pHyBN0PeNN4V54AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgyD4ahKQFcA-pCJD8F4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"mixed"}
]
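The raw response above is a JSON array with one record per comment, keyed by comment ID and carrying the four coded dimensions from the table (responsibility, reasoning, policy, emotion). A minimal sketch of how such output could be parsed and indexed for by-ID lookup — the `index_by_comment_id` helper and the inline sample record are illustrative, not part of the actual pipeline:

```python
import json

# Illustrative raw model output: a JSON array of coded records,
# in the same shape as the response shown above.
raw_response = """
[
  {"id": "ytc_UgzSacnnjognQdkC6Wl4AaABAg",
   "responsibility": "none", "reasoning": "mixed",
   "policy": "none", "emotion": "mixed"},
  {"id": "ytc_Ugz5GrVJufhB1opBg1J4AaABAg",
   "responsibility": "ai_itself", "reasoning": "consequentialist",
   "policy": "unclear", "emotion": "fear"}
]
"""

def index_by_comment_id(raw: str) -> dict:
    """Parse the model output and build a comment-ID -> record index."""
    records = json.loads(raw)
    return {rec["id"]: rec for rec in records}

codes = index_by_comment_id(raw_response)
rec = codes["ytc_UgzSacnnjognQdkC6Wl4AaABAg"]
print(rec["responsibility"], rec["emotion"])  # -> none mixed
```

Indexing by ID is what makes the "look up by comment ID" view cheap: one parse of the raw response, then constant-time retrieval of any coded comment.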