Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "You bring up an interesting point about wisdom and intuition! Sophia does emphas…" — ytr_UgyMWmk1H…
- "I mean it has access to the sum total of human digital data so it knows what we …" — ytc_Ugzr6uIPP…
- "I see this as meaningful, but the “Gorilla” analogy makes no sense. Gorillas ha…" — ytc_Ugxe976uw…
- "Eh, LLMs aren't leading to AGI anytime soon. OpenAI is just gonna run all these …" — ytc_UgwRgnSqi…
- "Ironically DoorDash and others food delivery services skyrocketed during the pan…" — ytc_UgwJc3Tqf…
- "Rally, meaning a fraction of the humans survive? I’m not interested in living in…" — rdc_n0hcfnl
- "I'm really thankful to Eureka for making me understand basics of AI. Can I pleas…" — ytc_UgwP7gCiV…
- "Learning to manipulate an algorithm to get desired results is not the same thing…" — ytr_UgzXfKyxs…
Comment
The fair use argument makes me really angry. I'm a software engineer by the way, I use AI in my daily pipeline, because it's really good at finding information for me when I'm dealing with a new API especially. I could go on and on about how AI is not actually intelligence but that's a completely different discussion. I would ask this guy, if I were to take all of Brian sanderson's books, and then rewrite my own work using nothing but his words and sentences, should that be covered by Fair use? Most people would point out that this is plagiarism, but apparently not this guy. When you ask an AI to generate a photo, every single pixel came from someone else's photo / art. Being a scrapbooker, is not the same as being an artist.
youtube
Viral AI Reaction
2025-10-15T22:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | deontological |
| Policy | unclear |
| Emotion | outrage |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_Ugx12AJRztQ0WxnLpZF4AaABAg","responsibility":"unclear","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
{"id":"ytc_Ugw4fcdl0YF1WhftpsB4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyLW4hQGLLNo95hga54AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"outrage"},
{"id":"ytc_UgwEqs2g2RBwIZ9xV7l4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"approval"},
{"id":"ytc_Ugwsa5R1oYKp1LCF3yZ4AaABAg","responsibility":"user","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgxdyAVnkRf0gBJM0Sx4AaABAg","responsibility":"user","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgwITNJUxBh-vyi0IS94AaABAg","responsibility":"user","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgxRJq06kgpNR7H5D1Z4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgyTolqhIuDz4ft5kAB4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_Ugw4q06B0bqCcaG-6Fp4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]
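The raw LLM response above is a JSON array of per-comment codings, one object per comment ID. A minimal sketch of how such a response could be parsed and looked up by comment ID (the two rows below are copied from the array above; the parsing code itself is illustrative, not the tool's actual implementation):

```python
import json

# Raw LLM response: a JSON array of per-comment codings
# (two rows excerpted from the response shown above).
raw_response = """
[
  {"id": "ytc_Ugx12AJRztQ0WxnLpZF4AaABAg", "responsibility": "unclear",
   "reasoning": "deontological", "policy": "unclear", "emotion": "outrage"},
  {"id": "ytc_Ugw4q06B0bqCcaG-6Fp4AaABAg", "responsibility": "company",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"}
]
"""

# Index the codings by comment ID for constant-time lookup.
codings = {row["id"]: row for row in json.loads(raw_response)}

coding = codings["ytc_Ugx12AJRztQ0WxnLpZF4AaABAg"]
print(coding["reasoning"], coding["emotion"])  # deontological outrage
```

Because each object carries its own `id`, codings can be joined back to the original comments even when the model returns them in a different order than they were submitted.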