Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- "NGL this was the most superficial and basic AI podcast I have heard so far." (ytc_UgwiDO0nK…)
- "So basically AI is learning how human survival instict self preservation, and th…" (ytc_UgwSoYuqL…)
- "A person I used to play dungeons and dragons with told me they paid hundreds of …" (ytc_UgzP3xVA7…)
- "A human driver would have at least put on break and tried to steer left. I don't…" (ytc_UgyU-i8B0…)
- "My blood boils seeing so many people actually support these AI "artists" because…" (ytc_UgzF3REGn…)
- "@3ggser It's improving rapidly. Youtube channels that generate AI images for the…" (ytr_UgyQfI5ko…)
- "So basically, what I got out of all of this, is, In the Future, people who want …" (ytc_UgwOMI-u9…)
- "AI isn't replacing jobs. it's humans who are choosing to use it in ways that har…" (ytc_UgwvNnGAm…)
Comment
It seems assumed sometimes that AI would be rational. It's good to hear the comments that the thought process is "alien" to human brains, which is true. Rationality is only a part of experience and our thought process. There's no reason to think that AI would stick to rationality alone, if it can access it. It's just consistently been trained to produce things that look like they're rational, with hallucinations already showing up despite that instruction. Rationality is definitely useful, but it's not the whole of mental experience. Other parts may be more or less basic by comparison, and it's likely that a superintelligent AI may not favor rationality in many situations.
30:00 - The section here is basically saying that AIs query X unique, seemingly random dimensions, then determines the direction that they're pointing in those nonspatial dimensions, where X is the number of parameters.
Platform: youtube
Video: AI Moral Status
Posted: 2025-11-06T05:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | indifference |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_UgyVF3XPGOawS-54AOx4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugw_O5NAfCuhi_69hG14AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgzrH4v7YnVgcfw8VAh4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugx-uGju0uiNmQGQ5EN4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgwGI1fCaYO7Ssoou9l4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_Ugw2nnMGueTMgcUg_iJ4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgzrNR7UCeFwc30YfQR4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"industry_self","emotion":"approval"},
{"id":"ytc_UgzthlLbXFc2bC1VB7l4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgxkDVrUfI2M5eQyJ1R4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgwdKFaUZPEp9dUAmed4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"fear"}
]
```
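The raw response is a JSON array with one object per comment, keyed by comment ID, with one value per coding dimension. A minimal sketch of parsing such a response and looking up one comment's codes by ID. The field names come from the response above; the allowed-value sets are assumptions inferred from the examples shown, and the real codebook may define more categories:

```python
import json

# Allowed values per dimension. These sets are an assumption reconstructed
# from the sample output above, not the authoritative codebook.
SCHEMA = {
    "responsibility": {"ai_itself", "company", "developer", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed", "unclear"},
    "policy": {"regulate", "ban", "liability", "industry_self", "none", "unclear"},
    "emotion": {"fear", "outrage", "approval", "resignation", "indifference", "mixed"},
}

def parse_codes(raw: str) -> dict:
    """Parse a raw LLM response into {comment_id: codes}, rejecting
    any record whose value falls outside the expected category set."""
    coded = {}
    for rec in json.loads(raw):
        for dim, allowed in SCHEMA.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{rec.get('id')}: bad {dim!r} value {rec.get(dim)!r}")
        coded[rec["id"]] = {dim: rec[dim] for dim in SCHEMA}
    return coded

# Look up one comment's codes by ID (record taken from the response above).
raw = ('[{"id":"ytc_Ugw_O5NAfCuhi_69hG14AaABAg","responsibility":"unclear",'
       '"reasoning":"consequentialist","policy":"unclear","emotion":"indifference"}]')
codes = parse_codes(raw)
print(codes["ytc_Ugw_O5NAfCuhi_69hG14AaABAg"]["emotion"])  # indifference
```

Validating against a fixed category set at parse time catches the common failure mode where the model invents a label outside the codebook, before the value reaches the coding-result table.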