Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples

- `ytc_Ugx7XkSaJ…`: "Why don't you spend some time doing actual journalism instead of regurgitating A…"
- `ytr_Ugxn39hqY…`: "@TigreEthereal Bro, it's not the end of the world. Life goes on. Survival is abo…"
- `ytc_UgwUnGwHU…`: "I really think ai will make the "expert" job way less valuable and may actually …"
- `ytc_Ugy8tdJno…`: "The Only thing that AI is doing good for humanity is ruining Satanic Hollywood. …"
- `ytc_UgxQkdDL9…`: "😂😂😂😂 I don’t laugh at you. I laugh at the situation because a lot of people real…"
- `ytc_UgyrsrOfR…`: "The people who made nukes thought it was going to be the end of the world when p…"
- `ytc_UgzIgqjP6…`: "You can tell it by the fingers just Count them ai can't get fingers good…"
- `ytc_Ugw9bBKr7…`: "13:25 I’m at that exact same spot as he is… I don’t even know what to tell my b…"
Comment
I've found that the Google AI generated answers on search results are often just completely wrong, or it contradicts itself. For instance, I wanted to know which characters in a game called Unicorn Overlord were most popular to execute (you can recruit or execute characters after fighting them) and Google claimed "There is no execute function in the game". Then, later that day, I searched for something else, and it basically said "that's not correct, but it is correct, even if its wrong".. I want the AI moved off the search results. It's not worth the misinformation to make it "convenient".
youtube · AI Responsibility · 2025-10-02T14:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | consequentialist |
| Policy | liability |
| Emotion | outrage |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_UgzClBw2yffnbmDqsHV4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"resignation"},
{"id":"ytc_UgwXqKcgBFSXWSysPad4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgwYyWAQHNYxWORHYBt4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugw7PBudyjcQt9itsbZ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgzuP4ikoNS3i4TgMYp4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"fear"},
{"id":"ytc_Ugzn8q8ySjm9C6EVvbN4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_UgwxjvRafMoimT5GgrZ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"mixed"},
{"id":"ytc_Ugy_xARqU3uDx-Ops-Z4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgzQMmMUSy-pEDOA3ol4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"},
{"id":"ytc_UgzjDzJ6v9yyY6hSQLV4AaABAg","responsibility":"company","reasoning":"unclear","policy":"unclear","emotion":"indifference"}
]
```
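The raw response is a JSON array with one record per coded comment, each carrying an `id` plus the four dimensions shown in the table (responsibility, reasoning, policy, emotion). A minimal sketch of how such a batch might be parsed and looked up by comment ID — the `parse_codes` helper and the placeholder IDs `ytc_AAA`/`ytc_BBB` are illustrative assumptions, not part of the actual pipeline:

```python
import json

# The four coding dimensions shown in the result table above.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def parse_codes(raw: str) -> dict:
    """Parse a raw LLM batch response into {comment_id: {dimension: value}}.

    Raises ValueError on a record missing its id or any dimension, so
    malformed model output fails loudly instead of being silently dropped.
    """
    codes = {}
    for rec in json.loads(raw):
        cid = rec.get("id")
        if not cid:
            raise ValueError(f"record without id: {rec!r}")
        missing = [d for d in DIMENSIONS if d not in rec]
        if missing:
            raise ValueError(f"{cid}: missing dimensions {missing}")
        codes[cid] = {d: rec[d] for d in DIMENSIONS}
    return codes

# Two hypothetical records shaped like the response above.
raw = '''[
  {"id":"ytc_AAA","responsibility":"company","reasoning":"consequentialist",
   "policy":"liability","emotion":"outrage"},
  {"id":"ytc_BBB","responsibility":"none","reasoning":"unclear",
   "policy":"unclear","emotion":"indifference"}
]'''
codes = parse_codes(raw)
print(codes["ytc_AAA"]["policy"])  # → liability
```

Validating every record up front is a deliberate choice: an LLM batch response can drop or mangle individual entries, and failing fast keeps a bad record from being coded as missing data.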