Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- "man my snapchat ai is already self aware. She just be playing hard to get saying…" (ytc_UgySDYrGM…)
- "So they were already making billions …..then decided to just fire people so they…" (ytc_UgxRJghpH…)
- "Is there any way we can print this out and set it directly to what is it calling…" (ytc_UgzrDRnP5…)
- "That's the thing. The technology has advanced so much since 2021 that you likely…" (ytr_UgxVgOAQL…)
- "No there were and still are a lot of people that genuinely don’t think that deep…" (rdc_lgn3cub)
- "They are not going to pay 💰 you to chill , they didn’t make AI so we can all hav…" (ytc_UgzsSUvnG…)
- "What's funny is that the biggest streamers like XQC and Asmondgold could be repl…" (ytr_UgyfNW8fU…)
- "I believe humanity would rather tear down civilization and purposefully trigger …" (ytc_UgwlgkMV5…)
Comment
> AI chatbots have shown abherant behaviours, caused people to kill themselves, worsened or manifested neurosis problems in people. AI systems have been used to kill people in war (ex; IDF uses AI for target assesment for bombing in Gaza), been used to hack important computer/communications systems( literally acts of war), been used to produce made up images/videos to ruin peoples reputations. How much more do you need to see the nascent power/threat AI is.

Platform: youtube · Topic: AI Governance · Posted: 2025-09-03T21:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | liability |
| Emotion | outrage |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
  {"id":"ytc_UgzHab0ivQQ69rzEJTl4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgxxrH69G3YZFCQ6sfR4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_UgzzA02dEHGpibt_TaF4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugx7quFIhl2J1y75YZl4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"},
  {"id":"ytc_UgyTpLS4FhfoPBBbjRN4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgxJAw7saiENN80ib294AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_UgxMC8tzPbSad0rw1m54AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgyqWQG2Y-oGnrCSd2h4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgyGleXYFI0ZbLC7EV94AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ugyx-O8QSE8IXMFEYl14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"}
]
```
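A raw response like the one above is only usable if every record parses and every dimension takes an allowed category. Below is a minimal validation sketch in Python; the codebook values are inferred from the samples shown on this page (an assumption — the real codebook may include categories not seen here), and the record in the usage example is a hypothetical illustration, not a real comment ID.

```python
import json

# Allowed values per dimension, inferred from the sample output above.
# Assumption: the actual codebook may contain additional categories.
CODEBOOK = {
    "responsibility": {"ai_itself", "company", "distributed", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "mixed", "unclear"},
    "policy": {"liability", "ban", "regulate", "industry_self", "none", "unclear"},
    "emotion": {"outrage", "fear", "approval", "indifference", "mixed", "unclear"},
}

def validate_records(raw: str) -> list[dict]:
    """Parse a raw LLM response and check each record against the codebook.

    Raises ValueError on a missing id or an out-of-codebook value, so a bad
    batch fails loudly instead of silently entering the results table.
    """
    records = json.loads(raw)
    for rec in records:
        if "id" not in rec:
            raise ValueError(f"record missing id: {rec!r}")
        for dim, allowed in CODEBOOK.items():
            value = rec.get(dim)
            if value not in allowed:
                raise ValueError(f"{rec['id']}: {dim}={value!r} not in codebook")
    return records

# Hypothetical single-record batch for illustration:
raw = ('[{"id":"ytc_example","responsibility":"company",'
       '"reasoning":"deontological","policy":"regulate","emotion":"fear"}]')
print(len(validate_records(raw)))  # 1
```

Rejecting the whole batch on any invalid value keeps the coded table consistent; a gentler alternative would be to log and skip bad records.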