Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- “I’m beginning to think that Zuckerberg is a robot (look at all the pictures we h…” (`ytc_Ugze_aGBj…`)
- “The purpose of this ad isn't to sell AI companies, it's to cause fear of AI amon…” (`ytc_UgxMSh-sr…`)
- “@BrandonClark-StocksPassportsThat's what people said about AI replacing entry l…” (`ytr_Ugy44tBdc…`)
- “I find the rhetoric from some of the AI leaders a bit contradictory. You can't, …” (`ytc_Ugx7dfTBP…`)
- “Were already at A.S.I. REMEMBER THE MILLITARY IS 30 YEARS ADVANCED COMPARED TO W…” (`ytc_UgzkvmL2T…`)
- “I disagree. I think AI is inefficient and unsustainable as it consumes too much …” (`ytc_UgzrDd6MA…`)
- “Exploring the limits of AI like ChatGPT can be fascinating, but it's crucial to …” (`ytc_UgxUqeF3o…`)
- “Put them head to head. No system is 100% safe. But FSD is as safe if not more …” (`ytc_UgxbVGZ5g…`)
Comment
This looks like a video by AI, about how AI is killing the internet and humanity. Then I read the BIO of the channel. It could be AI just lying, but 70 people? That's some serious work. I fully agree with everything in the video, and I think it even goes deeper than just slop on the net. What about the AI camera system that detects things that look like weapons in schools? The cops show up and find out it was a chip packet or something totally fine. what happens when the AI is also the enforcement and there is no human in the loop to recognise these mistakes? What about medical slop, decisions made on someones health based on an AI diagnosis AND and an AI remediation for that? what happens when the AI lawyer or judge gets a strange case and decides the fate of a life? I think we are in real trouble, people.
youtube · 2026-01-28T13:0… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
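A coded record like the one in the table above can be sanity-checked against the codebook before display. A minimal sketch, assuming the allowed value sets are exactly those observed in this page's raw response (the real codebook may define additional values):

```python
# Value sets inferred from the raw LLM response shown on this page.
# NOTE: assumption — the actual codebook may allow more values.
OBSERVED_VALUES = {
    "responsibility": {"none", "user", "company", "elite", "distributed", "ai_itself"},
    "reasoning": {"unclear", "consequentialist", "deontological"},
    "policy": {"unclear", "none", "regulate", "ban"},
    "emotion": {"resignation", "approval", "mixed", "fear", "outrage"},
}

def invalid_fields(record: dict) -> list[str]:
    """Return the coding dimensions whose value is missing or unrecognized."""
    return [
        dim for dim, allowed in OBSERVED_VALUES.items()
        if record.get(dim) not in allowed
    ]

# The coding result rendered in the table above:
coded = {
    "responsibility": "distributed",
    "reasoning": "consequentialist",
    "policy": "unclear",
    "emotion": "fear",
}
print(invalid_fields(coded))  # → []
```

An empty list means every dimension carries a value the batch has produced before; anything else flags a record for manual review.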
Raw LLM Response
```json
[
  {"id": "ytc_UgzK4QpBXkJXedrx5yx4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "resignation"},
  {"id": "ytc_UgzlUy08WYzr016GmZx4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugx-6-zMQ7ZyAmY9h354AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_Ugxbb1jEPLWLU_Gf2HB4AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgzNysH4v50032NDOo94AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_UgyfeyVe_f-qWJmUeH54AaABAg", "responsibility": "user", "reasoning": "deontological", "policy": "unclear", "emotion": "outrage"},
  {"id": "ytc_Ugz6SwCvr7ygXXWodd94AaABAg", "responsibility": "elite", "reasoning": "consequentialist", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgzBk2uWdy52Ibazolp4AaABAg", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "ban", "emotion": "outrage"},
  {"id": "ytc_UgyT0T2TzGVLr3PgL_J4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgzrBV3DGnJ8qrOmio94AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "resignation"}
]
```
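The "look up by comment ID" feature reduces to indexing the parsed response array by its `id` field. A minimal sketch, using two records copied verbatim from the raw response above (the full response contains ten):

```python
import json

# Two records copied from the raw LLM response above.
raw_response = '''[
  {"id": "ytc_Ugxbb1jEPLWLU_Gf2HB4AaABAg", "responsibility": "distributed",
   "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgzNysH4v50032NDOo94AaABAg", "responsibility": "company",
   "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"}
]'''

# Index the batch by comment ID for constant-time lookup.
by_id = {rec["id"]: rec for rec in json.loads(raw_response)}

rec = by_id["ytc_Ugxbb1jEPLWLU_Gf2HB4AaABAg"]
print(rec["emotion"])  # → fear
```

Because each batch response is a flat JSON array with unique `id` values, this one-line dict comprehension is all the lookup index requires.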