Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
I still don't understand how AI could harm anyone. It's not going to crawl out of the computer and start firing off nukes. What would be really cool is if AI somehow permanently destroyed the internet. Or like, sabatoged it in a way that took a really long time to get it working again. I could see how some people might see that as the end of the world, but it's probably for the best.
Source: youtube
Topic: AI Moral Status
Posted: 2025-12-11T03:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
[
{"id":"ytc_UgxPNcZga7pERXvs0pB4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgzKOnoocdpdLNK0DIN4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgxqAtNuS-32E9sZr0J4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugxx5xMaMtWKjNP0NZx4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwEjbMXbp6-L1pvdxZ4AaABAg","responsibility":"ai_itself","reasoning":"virtue","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugy8bbDqn9aCpgGSE8V4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgxcETYs4bCE7yDtf7d4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugw1vcrE4k_1-YEsQK94AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"industry_self","emotion":"mixed"},
{"id":"ytc_UgzaZWmQ8e3Q9Rb1pOJ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_Ugy0GyFjeVSvvD0bGG54AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"regulate","emotion":"mixed"}
]
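The raw response above is a JSON array in which each record carries the comment `id` plus one value per coding dimension. A minimal sketch of how such a batch could be parsed and sanity-checked is below; the allowed value sets are inferred only from the sample output shown here, so the real codebook may contain additional categories, and the `parse_coded_batch` helper name is illustrative, not part of the tool.

```python
import json

# Allowed values per dimension, inferred from the sample response above.
# Assumption: the actual codebook may define more categories than these.
DIMENSIONS = {
    "responsibility": {"none", "unclear", "ai_itself", "distributed", "company"},
    "reasoning": {"unclear", "mixed", "consequentialist", "virtue", "deontological"},
    "policy": {"none", "unclear", "liability", "industry_self", "regulate"},
    "emotion": {"fear", "mixed", "indifference", "approval", "outrage"},
}

def parse_coded_batch(raw: str) -> list[dict]:
    """Parse a raw LLM batch response and flag malformed records."""
    records = json.loads(raw)
    for rec in records:
        if "id" not in rec:
            raise ValueError(f"record missing id: {rec!r}")
        for dim, allowed in DIMENSIONS.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{rec['id']}: unexpected {dim}={rec.get(dim)!r}")
    return records

# Usage: parse a one-record batch in the same shape as the response above.
raw = ('[{"id":"ytc_example","responsibility":"none",'
       '"reasoning":"consequentialist","policy":"none","emotion":"approval"}]')
coded = parse_coded_batch(raw)
print(coded[0]["emotion"])  # -> approval
```

Validating against a fixed vocabulary like this catches the common failure mode where the model invents an off-codebook label, which would otherwise silently pollute downstream tallies.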