Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- "I loved your video, I have yet to find a good argument against AI. I personall…" (ytc_UgwLLdtwf…)
- "You underestimate the size of data. We have a lot of storage sure, but there is …" (ytr_UgxrZ-uxA…)
- "There might be puppet master AI that monitors ChatGPT. So long pauses can be tha…" (ytr_UgyxmBQzc…)
- "Note for future public interest if shit hits the fan fast and badly enough: AI c…" (ytc_UgwSTGqgC…)
- "As a software developer, I relate to your efforts in poisoning the AI. I neither…" (ytc_Ugwp6BlG_…)
- "Yes, if ai is built that doesn't depend on stolen human data to work is able to …" (ytr_Ugw0BG1KQ…)
- "That just isn't true. If it were truly a pollutant, or at the current levels of …" (rdc_e43e71i)
- "I understand your concern about using AI to create artwork and the potential imp…" (ytc_UgwZc5ynY…)
Comment
One issue I see with the scary "alienness" observations in this conversation is that some of them only exist precisely BECAUSE AI is not reasoning (in the traditional sense). Hallucinations are an example: LLMs rely so heavily on text prediction that they make things up based on what an analogous text "would" look like. But that is specifically a consequence of it being an LLM; one would think that if AGI is (definitionally) capable of true reasoning, it wouldn't fall into that trap, because rather than relying on, essentially, advanced text prediction, it would be... you know, thinking.
youtube · AI Moral Status · 2025-10-31T17:0… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | mixed |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_UgyNHqZUDNOPwqZKARt4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgxwngqWY3PZ1RMDiVZ4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyGUWWznOpZYpcVq4Z4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgwbxB-00O7l8SPvTJN4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgxN57qcyAfck2cjYvd4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgweqTea-ZRX35b2tMB4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_UgyMHE8XQN0fwQ1bNyt4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_Ugzmmq1hixkiE35vgaV4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgxFHFyiCiGg-sC_1fB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_UgzK_kvw3sXXX8OyAap4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"resignation"}
]
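The lookup-by-comment-ID flow above can be sketched as a small parser over a raw response like this one. This is a minimal sketch, not the tool's actual implementation; the `OBSERVED` value sets are inferred only from the responses shown on this page, so the full codebook may contain more values.

```python
import json

# One entry copied verbatim from the raw response above.
RAW = '''[
  {"id":"ytc_UgxwngqWY3PZ1RMDiVZ4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"}
]'''

# Value sets observed in the responses on this page (assumption: not exhaustive).
OBSERVED = {
    "responsibility": {"none", "company", "user", "ai_itself", "distributed"},
    "reasoning": {"consequentialist", "deontological", "mixed", "unclear"},
    "policy": {"none", "regulate", "liability", "ban"},
    "emotion": {"fear", "indifference", "approval", "outrage", "resignation", "mixed"},
}

def index_by_id(raw: str) -> dict:
    """Parse a raw LLM response and key each coding by its comment ID,
    rejecting any dimension value outside the observed sets."""
    out = {}
    for row in json.loads(raw):
        for dim, allowed in OBSERVED.items():
            if row.get(dim) not in allowed:
                raise ValueError(f"unexpected {dim} value in {row['id']}: {row.get(dim)!r}")
        out[row["id"]] = {dim: row[dim] for dim in OBSERVED}
    return out

codings = index_by_id(RAW)
print(codings["ytc_UgxwngqWY3PZ1RMDiVZ4AaABAg"]["emotion"])  # indifference
```

Keying on the `id` field is what makes the "inspect the exact model output for any coded comment" lookup cheap: one parse, then constant-time retrieval per comment.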