Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
I can see why countries would be against things like autonomous nukes, but I don…
rdc_kassk4h
people really be blaming games and AI than taking accountability.
I was born in …
ytc_Ugx0Lzhro…
FAKE! Nothing like rotoscoping out whomever the fighter was that actually knocke…
ytc_Ugy4rG6gu…
I hear AI almost immediately. And I easily predict the age range of the voice c…
ytc_UgyMkzieR…
@steven - that section around 1hour 30 min in, where you describe the types of r…
ytc_UgyozvqfW…
ya know, as much as im all up for ai countermessures (as artis myself)
i cant sh…
ytc_Ugwl_Rvnu…
Man i treat ai as a bro sometimes we chill sometimes we have a problem…
ytc_Ugy5a12_r…
Problem with her is she thinks she can mitigate CLIMATE CHANGE. She would have t…
ytc_Ugz9uPS9B…
Comment
The problem with LLM's is that do not think or reason independently. Worse yet is that are biased and secretly filter information. I asked Grok, on two independent accounts, the same questions. Grok gave contradictory answers. Grok did not agree with itself, neither answer being FACTUAL!!! After further inquiry I found that it had apply, without my knowledge, filters. Here was the question: How many votes occurred in the House of Reps since the Republicans took office? One of the filters was "voice" votes, being filtered out. There needs to be ways to select the type of LLM you want. For example, you should be able to select to interact with a COMPLETELY unbiased and unfiltered model. But, the question becomes, who would want to interact with a LLM that they know gives biased and filtered answers resulting in lies. It seems ALL LLM programmers are creating LLMs to lie to us...FOR OUR OWN GOOD, of course.
youtube
2025-12-13T17:0…
♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | regulate |
| Emotion | outrage |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
[
  {"id": "ytc_UgwkMpZyI20iwEk8MqV4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgwFXg3UWH5LnZmzLJt4AaABAg", "responsibility": "ai_itself", "reasoning": "mixed", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgxsDcxrbt2kw1I8-Ct4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugw5B5rv_0T3Wdf3c5x4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugx6SN6VJ25MFmYilld4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgyUcTf2V9byaceMhe54AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_UgzfmFSXtVrQL6sZHC14AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugw4CP5QaVgPGrukMwJ4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgxaFFoGZm_LhMqxHOF4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgxZifGOAnGebTMLaRF4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"}
]
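The raw response is a JSON array with one record per comment, carrying the four coding dimensions shown in the result table. A minimal sketch of how such output might be parsed and validated before storage (the allowed-value sets below are assumptions assembled from the labels visible on this page; the real codebook may define more categories):

```python
import json

# Allowed labels per dimension -- an assumption based on the values
# seen in this dump, not the authoritative codebook.
ALLOWED = {
    "responsibility": {"developer", "ai_itself", "user", "none"},
    "reasoning": {"deontological", "consequentialist", "mixed", "none"},
    "policy": {"regulate", "ban", "none"},
    "emotion": {"outrage", "approval", "indifference", "mixed"},
}

def parse_codes(raw: str) -> list[dict]:
    """Parse a raw LLM response into validated code records.

    Raises ValueError on a missing field or an out-of-vocabulary
    label, so malformed model output fails loudly instead of being
    silently written to the results table.
    """
    records = json.loads(raw)
    for rec in records:
        if "id" not in rec:
            raise ValueError(f"record missing 'id': {rec!r}")
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{rec['id']}: bad {dim}={rec.get(dim)!r}")
    return records

# Usage, with one record taken verbatim from the response above:
raw = ('[{"id":"ytc_UgyUcTf2V9byaceMhe54AaABAg",'
       '"responsibility":"developer","reasoning":"deontological",'
       '"policy":"regulate","emotion":"outrage"}]')
codes = parse_codes(raw)
print(codes[0]["policy"])  # regulate
```

Validating against a closed label set like this is what makes batch coding auditable: any record that passes can be joined back to its comment by `id`, and any hallucinated label is caught at ingest time.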