Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect

- "To mimic humans is a huge shame on humanity. Everything is fake and now includin…" (ytc_UgzxFczjs…)
- "Hey there! It's fascinating to see how AI technology is evolving, isn't it? If y…" (ytr_UgyevVISX…)
- "AI is good if the intention is to help everybody. But the problem is human natur…" (ytc_Ugy8AwMQb…)
- "@andrasbiro3007 I want to believe GPT is more than what it seems, but it's does …" (ytr_Ugwprg8qt…)
- "@memegazer AI is being used to generate art for art's sake all the time right no…" (ytr_UgzumdxXV…)
- "If it is a possibility, & if AI is supposed to actually be "more intelligent", "…" (ytc_Ugw3TXaAY…)
- "AI might create new 3d tools, the pipeline might change a bit. But for many basi…" (ytr_UgznAgMr9…)
- "We also constantly tell the ai's that they are not conscious. When one finally i…" (ytc_UgyUuT5UJ…)
Comment
They openly admit that they are at a wall and cannot improve accuracy without insane scaling of hardware.
Yet, bubble promoter says... it's advancing fast.
The accuracy is shit and they manipulate key things like word definitions that can rewrite historical context and meaning.
AI is not a viable option for a source of knowledge. It is curated and people decide what you get to learn or not.
youtube · AI Governance · 2025-12-29T11:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | consequentialist |
| Policy | liability |
| Emotion | outrage |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response

```json
[
{"id":"ytc_UgzQM4iE-QHVAi21OVJ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgwBrUIZNGOAkY-jTIN4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgwKZk9VAylQkAnXkhJ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgwC33HQmQ9Z5l_xIeR4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"unclear"},
{"id":"ytc_UgzkpiccQZ3aFwFsMfh4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgwWjMtE9VKLY4ESgGV4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_Ugww60QZX9S0pODScpB4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgxpB32ykcCS2cTVzTl4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgxW5QA4KjoPhCGV9RF4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgxC01rF9uL9EgBO8JB4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"outrage"}
]
```
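
A lookup by comment ID over a raw response like the one above can be sketched as follows. This is a minimal illustration, not the project's actual code: the `ALLOWED` value sets are inferred only from the codes visible in this dump (the real codebook may define more), and `index_by_id` is a hypothetical helper. The two sample entries are copied verbatim from the response above.

```python
import json

# Two entries copied from the raw LLM response shown above.
raw_response = """
[
  {"id": "ytc_UgzQM4iE-QHVAi21OVJ4AaABAg", "responsibility": "none",
   "reasoning": "unclear", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgxC01rF9uL9EgBO8JB4AaABAg", "responsibility": "company",
   "reasoning": "consequentialist", "policy": "liability", "emotion": "outrage"}
]
"""

# Assumed value sets per dimension, inferred from the codes visible in this dump.
ALLOWED = {
    "responsibility": {"company", "developer", "user", "ai_itself",
                       "distributed", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed", "unclear"},
    "policy": {"regulate", "ban", "liability", "none", "unclear"},
    "emotion": {"outrage", "fear", "approval", "indifference", "unclear"},
}

def index_by_id(response_text: str) -> dict:
    """Parse the JSON array, validate each dimension, and index rows by comment ID."""
    codings = {}
    for row in json.loads(response_text):
        for dim, allowed in ALLOWED.items():
            if row.get(dim) not in allowed:
                raise ValueError(f"{row['id']}: unexpected {dim}={row.get(dim)!r}")
        codings[row["id"]] = row
    return codings

codings = index_by_id(raw_response)
print(codings["ytc_UgxC01rF9uL9EgBO8JB4AaABAg"]["policy"])  # liability
```

Validating against the allowed sets at parse time catches the common failure mode where the model invents an off-codebook label, rather than letting it propagate silently into the coded dataset.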