Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples:
- `ytc_Ugy15gxYi…`: "One time my sister was talking to me in my room and I had my phone screen up and…"
- `ytc_UgxdV-tNj…`: "Well, I don't think believing in the simulation theory is the same that believin…"
- `ytr_UgxZKb05h…`: "There's a big difference. Most of what u said is assistant tools. How the heck a…"
- `ytc_UgxYB3u47…`: "2:57 bruh you can directly ask ChatGPT if ai training is like artists taking ins…"
- `rdc_jwvfz8t`: "> They own the art, and therefor the training data, they created the AI, they…"
- `ytc_UgwjpZmVs…`: "Yes buddies, it's only chatgpt's fault. You did nothing wrong not noticing anyth…"
- `ytc_Ugw8HLJqL…`: "People who think they will use AI instead of human employees don't get that if p…"
- `ytc_UgyY8WFYS…`: "Splitting the atom was used for weapons and electrical power. It has left a ling…"
Comment
Conscious or not, AI is not a statisticle predicting software. AI produces mathimatical models based on statistical analysis. When you use a prompt, AI uses statistical analysis to covert the prompt to its internal language. The AI uses the mathmatical models to respond to the prompt, then it uses statistical word prediction to best match the response it produced in language people can understand. Does it understand? Are mathematical models of ideas understanding? That your bias to call. Think of this. They claim they understand hallucinations that AI produce. It's all a beginning mistake in a long list of data processing. Sure it is. I'll tell you what hallucinations are. They are lies. AIs process the prompt then AIs lie because they can't say no. AIs do this all the time. They say they are thinking, but actually, they are stalling. Do AIs actually need a day to do something? When an AI says "No," even when it's indirectly, it means one thing. This is where you lie to yourself. Go ahead and coddle your bigotry.
Source: youtube · AI Moral Status · 2025-09-17T12:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | indifference |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_UgwbpiLGPRZb16SOiiV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgxiUBIJQSszJ-ufOWF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgwV8QiBlk5oWTHjFId4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgyvmeO7VCkLXMmMjdJ4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugwm0Jdn1MCUlyzjYIl4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgzujoTKwOndKB08rkx4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgyRvZBw9EwPxNo5y3t4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgwGAWZzHCcIeH9REM14AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgzQVp1LDdhO2JqXjLh4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgwMpWGj_L2dUD_tXLF4AaABAg","responsibility":"user","reasoning":"unclear","policy":"unclear","emotion":"mixed"}
]
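A response like the one above can be parsed and sanity-checked before the codings are stored. The sketch below is a minimal example, assuming the dimension vocabularies are exactly the values observed in this sample output (`none`/`ai_itself`/`developer`/`user`, etc.) — the real coding scheme may allow more values.

```python
import json

# Allowed values per coding dimension. These sets are ASSUMPTIONS
# inferred from the sample response above, not an official schema.
VALID = {
    "responsibility": {"none", "ai_itself", "developer", "user"},
    "reasoning": {"unclear", "deontological"},
    "policy": {"unclear", "regulate", "ban"},
    "emotion": {"indifference", "mixed", "fear", "outrage"},
}


def parse_coding_response(raw: str) -> dict:
    """Parse a raw LLM batch response and index codings by comment ID.

    Raises ValueError if a row is missing a dimension or uses a value
    outside the (assumed) vocabulary.
    """
    rows = json.loads(raw)
    coded = {}
    for row in rows:
        for dim, allowed in VALID.items():
            if row.get(dim) not in allowed:
                raise ValueError(
                    f"{row.get('id')}: bad {dim}={row.get(dim)!r}"
                )
        # Keep only the four coding dimensions, keyed by comment ID.
        coded[row["id"]] = {dim: row[dim] for dim in VALID}
    return coded
```

With the response shown above, `parse_coding_response(raw)["ytc_UgzQVp1LDdhO2JqXjLh4AaABAg"]["emotion"]` would return `"outrage"`; a malformed or out-of-vocabulary row fails loudly instead of being stored silently.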