Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- ytc_Ugw_GUNfK…: "Technically, AI could be trained only on public domain works BUT no one would wa…"
- ytc_UgwqUE-4A…: "LLM are not actual AI it is more about a smarter Google assistant or Seri becaus…"
- ytc_UgzBGHDG9…: "I feel like the A.Is are playing "dumb" or atleast don’t show case their actual …"
- ytr_UgwzZfg50…: "im a 20 year professional illustrator. i do comic, hentai(porn) furry commissio…"
- ytc_Ugyj-mt9n…: "I am reminded of the Robot in Lost in Space when it had a meltdown😂😂. Dr Smith o…"
- ytc_UgyDlgXwv…: "When ai is weak they have nice words / When ai gets stronger they're words will …"
- ytc_UgwqXMfJz…: "3:04 like AI Narrators / 3:42 Relevance is not context. AI still doesn't do contex…"
- ytc_Ugwj2UKsC…: "AI is leading to social credit scores, digital currency and digital ID, this al…"
Comment
I think the things I’m most worried about at the moment are ubiquitous ai surveillance and information control to an extent resistance to autocrats becomes impossible, mostly because you don’t actually need AGI or super intelligence for that, it’s already possible, it’s just a problem of building the infrastructure to run it, which is already underway.
Super intelligence I think may be possible, but I’m less worried about it, mainly because the steps to get there from here are kind of vague and abstract, and the hard philosophical problems around it typically get hand waved just to be able to even talk about it, so it’s not clear to me that we really even have any way of assessing claims about it at all.
Source: youtube | Video: AI Moral Status | 2025-10-30T22:0… | ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | government |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_UgwiNphKFW9X1-QaJ-14AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgzRjAa1xY9Z5cAgqhx4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgzHJqxEZwW92ojEIM54AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgwKEyRf9Efg1gtDGVN4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugzsz86Dgtuqvi6ELtx4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_Ugwqp2A-ZgRV4MaerRt4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"approval"},
{"id":"ytc_UgwJPHWUcnvJotZFqnR4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
{"id":"ytc_Ugw2h4n1cyMj8mxDYGN4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgyAJ2kAfyBWrCvGR6F4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgymcRj0Dpo-ThynfKx4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"unclear","emotion":"mixed"}
]
```
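The raw response is a JSON array of per-comment codings, so "look up by comment ID" reduces to parsing the array and indexing on the `id` field. A minimal Python sketch of that lookup (the `lookup_by_id` helper is hypothetical, not part of the tool; the entries are copied from the response above, truncated to three for brevity):

```python
import json

# A slice of the raw LLM batch response shown above (three of the ten entries).
raw_response = """
[
  {"id": "ytc_UgwiNphKFW9X1-QaJ-14AaABAg", "responsibility": "ai_itself",
   "reasoning": "unclear", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_Ugw2h4n1cyMj8mxDYGN4AaABAg", "responsibility": "government",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgymcRj0Dpo-ThynfKx4AaABAg", "responsibility": "developer",
   "reasoning": "mixed", "policy": "unclear", "emotion": "mixed"}
]
"""

def lookup_by_id(raw: str, comment_id: str):
    """Parse the raw batch response and return the coding dict for one comment ID,
    or None if that ID is absent from the batch."""
    codings = {entry["id"]: entry for entry in json.loads(raw)}
    return codings.get(comment_id)

coding = lookup_by_id(raw_response, "ytc_Ugw2h4n1cyMj8mxDYGN4AaABAg")
print(coding["policy"], coding["emotion"])  # → regulate fear
```

Building the `id`-keyed dict once makes repeated lookups O(1), which matters if the same batch is inspected for many comments.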