Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples — click to inspect
- "My problem isnt that ai is "easy to do" and how "anyone can do it", my problem i…" (`ytc_UgzhBy-SJ…`)
- "@sofiabravo1994no. Humans are suppose to respect each other so humans don’t use …" (`ytr_Ugw97_pMS…`)
- "no, we shouldn't oppose automation, or increased productivity in general. We sho…" (`ytr_UgxG7iGnZ…`)
- "not blaming this child because he had obvious mental health issues, but if you'r…" (`ytc_UgxSB8beG…`)
- "AI art has drifting issues as well and is still struggling to stay consistent, r…" (`ytr_UgzWjaqtv…`)
- "@DilawonDandek sorta yes, sorta no. All machines need maintenance and updates. …" (`ytr_Ugw_N2ppW…`)
- "I feel that people are putting inspired people who put in the effort to learn th…" (`ytc_UgxYzajxj…`)
- "Ai is going to give the most generic bs. But the writers haven't really been doi…" (`ytc_Ugw3LLdzT…`)
Comment
As concerning as AI can be, the truth is that misinformation, lying, editing, and simply normal framing, priming and construction have always been more than adequate to achieve the goal of misleading people.
The problem is not that a new powerful tool exists, but that there has not historically been any meaningful regulation of any aspect of the system, and voters are not equipped to navigate the informational landscape, and never were.
Humans don't evaluate facts or arguments independently, they cannot do so, they lack relevant expertise and knowledge. They determine whom to trust. Despite how tenuous that methodology is in today's world, it always was. Again, it's never quite as new as you imagine. The big problems in recent times have not been a new trend in information navigation, but rather feedbacks and cognitively available compulsion-feeding sources of affirmation and conflict. That is to say, people can easily and repeatedly have access to confirmation of their views, affirmation of their righteousness, and examples of people they want to be angry at.
This consistency and availability maintains a high level of emotional arousal, and it diminishes self-reflection and critical capacity, because people are, frankly, too busy engaging in conflict to evaluate their own positions. It makes people more defensive, more anxious, and less informed because they are more guarded about competing views or values, seeing them as a threat to their strongly consolidated perspective.
They also are more strongly engaged in enclave polarisation activities, such as purity testing within their peer groups, which are more rigorously selected for agreement.
The point is, the problem isn't more sophisticated tools for fooling people. The problem is that there is very little epistemic authority that people who might disagree on topics can agree to trust, and people in general are less and less interested in attempting to gauge the trustworthiness of sources, because they are more concerned with agreement than validity.
Basically, it doesn't matter how you deceive people, if they want to be deceived before you even get started.
Platform: youtube
Video: Viral AI Reaction
Timestamp: 2024-02-24T00:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | mixed |
| Policy | regulate |
| Emotion | resignation |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_UgxFCNU8_n4UJKt1WtN4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgzFOtNAGw-VetBWLgp4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
  {"id":"ytc_UgxV2MsBem071PaXUBV4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgyK5eKapBBzm4v4yA94AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgykqKt9-JJTo56xCu14AaABAg","responsibility":"developer","reasoning":"virtue","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgxFPPmkPBDpqfLnzwN4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgxV4xRAn2lZjaL6Dfd4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugxt_0B13JpIypJTUmp4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgywDpH7dInIk-Vs3Hl4AaABAg","responsibility":"company","reasoning":"virtue","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgxzNlx_3ayEn8tcdCt4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"regulate","emotion":"resignation"}
]
```
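A minimal sketch of how a raw response like the one above can be parsed, validated, and indexed for lookup by comment ID. The allowed value sets below are an assumption inferred only from the codes visible in this sample; the actual codebook may define more categories.

```python
import json

# Allowed codes per dimension -- ASSUMPTION: inferred from the values
# visible in this sample output, not from the project's real codebook.
SCHEMA = {
    "responsibility": {"developer", "company", "ai_itself", "distributed", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed", "unclear"},
    "policy": {"regulate", "liability", "ban", "none", "unclear"},
    "emotion": {"fear", "outrage", "approval", "resignation", "indifference"},
}

def parse_llm_response(raw):
    """Parse a raw LLM response (JSON array of coded records).

    Returns (coded, problems): `coded` maps comment ID -> record for
    lookup, `problems` lists (id, dimension, value) triples whose value
    falls outside the expected code set.
    """
    records = json.loads(raw)
    coded, problems = {}, []
    for rec in records:
        for dim, allowed in SCHEMA.items():
            if rec.get(dim) not in allowed:
                problems.append((rec.get("id"), dim, rec.get(dim)))
        coded[rec["id"]] = rec
    return coded, problems
```

With this index, the "inspect the exact model output for any coded comment" step reduces to a dictionary lookup, e.g. `coded["ytc_UgxzNlx_3ayEn8tcdCt4AaABAg"]`, and `problems` flags any record the model coded with an unexpected label.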