Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- ytc_UgwTeBIyO…: "I'm glad I clicked on this, because you my dear understand how to ask the questi…"
- ytc_Ugzp3Rga1…: "This is Bs. Like my buddy said, they are peeing down your back and telling you i…"
- ytr_UgxQDTeVf…: "@erikmckoul2478 When there are fewer humans than AI robots, their tactics will c…"
- ytc_UgzBJ2t2P…: "What seems more likely in all of this is that because of our greed and fear, we …"
- ytc_UgzZfmgwC…: "So you're having a conversation with chatgpt, but then chatgpt starts advertisin…"
- ytc_UgxrAreh8…: "we trained an ai to act like a human, and we're here acting surprised that it su…"
- rdc_nbsl4zj: "Developer with over 3 decades of experience in too many languages to list here w…"
- ytc_UgxYxtSfD…: "The Google and Cisco people are dead wrong. The Internet didn’t make video renta…"
Comment
What I want from AI: A virtual ASSISTANT (not worker) that can help with organizing my work, self driving cars, robots that can do chores, a better spell checker that can analyse context, etc.
What I don't want from AI: Generating entire projects based on stolen work.
If you want to do make an ai that can generate art, PAY for the art you use as a data set or use art owned by the public domain. I am a huge supporter in pirating, because I like to stick it to netflix and disney. That is still stealing in art, you can argue on whether it's actually stealing or not, it's a separate debate. But CHARGING for stolen art, which is what most ai generating platforms do, that I will never stand for. You should be able to appreciate art regardless of your wallet, but you should never be able to make money of other people's work without their consent (which should never be implied).
youtube · Viral AI Reaction · 2025-12-02T16:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | deontological |
| Policy | liability |
| Emotion | outrage |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
{"id":"ytc_Ugz6PIEPvB-yLFRTU9t4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgzMgm1VOixomIsFB4t4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_UgzCWFyoz3HZ-Zpk49N4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_UgxRvHe9Bejb8WDWG5F4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgxphrfGJBeDflRcvmF4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"disapproval"},
{"id":"ytc_UgzWCJQXiSOISZima554AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgxJ5LEj4_M_KSi5uZJ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgxMvPeqlAm7vJBozPR4AaABAg","responsibility":"none","reasoning":"none","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxVgiUkWUF-UaP58Gl4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_Ugw8CyQTF50sRLSFiyF4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"approval"}
]
```
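The raw response is a JSON array with one object per coded comment, keyed by comment ID, with one field per coding dimension. A minimal parsing-and-validation sketch in Python; the allowed-value sets are assumptions inferred from the values visible in the table and response above, not a confirmed codebook:

```python
import json

# Allowed values per coding dimension. These sets are assumptions
# inferred from the values observed above; the real codebook may
# include additional labels.
ALLOWED = {
    "responsibility": {"company", "user", "none"},
    "reasoning": {"deontological", "consequentialist", "virtue", "none"},
    "policy": {"liability", "regulate", "none"},
    "emotion": {"outrage", "approval", "fear", "disapproval", "indifference"},
}

def parse_codes(raw: str) -> list[dict]:
    """Parse a raw LLM response, keeping only well-formed rows
    whose values all fall inside the allowed sets."""
    rows = json.loads(raw)
    valid = []
    for row in rows:
        if not isinstance(row, dict) or "id" not in row:
            continue  # skip malformed rows
        if all(row.get(dim) in vals for dim, vals in ALLOWED.items()):
            valid.append(row)
    return valid

# Hypothetical one-row response for illustration.
raw = ('[{"id":"ytc_x","responsibility":"company","reasoning":"deontological",'
       '"policy":"liability","emotion":"outrage"}]')
print(parse_codes(raw))
```

Filtering rather than raising keeps a single hallucinated label from discarding an entire batch; rejected rows can be logged and re-queued for recoding.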