Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect

- "It is bad to use Ai to see my errors sometimes or even trying to understand what…" (ytc_UgwDprFxy…)
- "I love this! Just getting them involved with running a food truck alone teaches …" (ytc_UgzsysDPv…)
- "Predictive algorithms can't account for how human beings change, for good or bad…" (ytc_Ugy2g28mj…)
- "A huge part of being human is contributing to society, having meaning and purpos…" (ytc_UgyjmrtNb…)
- "The bananna, is a great example, or the buckets of sand falling over. Or the "ta…" (ytc_UgxQihxxf…)
- "Your show often talks about very complex things in a simplified yet understandab…" (ytc_Ugz3Kyk2i…)
- "This NEEDS TO STOP! The AI chip in your phone is used for Data Collection, don’t…" (ytc_Ugzp3H_Db…)
- "Just because you can doesn't always mean you should. I am a fan of Elon but I do…" (ytc_UgwJIQwZu…)
Comment

> LLMs are literally just con-artist-machines. They have studied how it looks to do things correctly and can mimic that behavior very convincingly without actually knowing the underlying logic or meaning to their output. And as you talked about, they exude confidence and present their hallucinations as facts because that is how their input presented information as well. Whether or not the information is true or relevant is not merely secondary but fully irrelevant to an LLM

Source: youtube · AI Moral Status · 2025-10-31T08:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | deontological |
| Policy | liability |
| Emotion | outrage |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_UgyZZEpDQ4Fol_rRz3d4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_UgxnhVMdx4H5KG97R914AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_UgwgoTu7UFS3CUEDwlF4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugz8w9Zsyzc24y2przp4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgxtR4Pt8nUMCs_ZJ3x4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgyC5Gw2e__-OdtBDZF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgydqfQICatDtEr9AZ14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgyrDlVgZczTRreG_al4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgyXw9i7ZA1Aq7C_Q0F4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgwmnECZLmYxsytfsqR4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"liability","emotion":"outrage"}
]
```
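Since the raw model output is a single JSON array of per-comment codings, looking a coding up by comment ID amounts to parsing the array and indexing it. The sketch below shows one minimal way to do this; `index_by_comment_id` is a hypothetical helper written for illustration, not part of the tool itself, and the sample rows are taken from the response above.

```python
import json

# Raw model output in the format shown above: a JSON array where each
# element carries a comment ID plus the four coded dimensions.
# (Two sample rows copied from the response; the full run had ten.)
raw_response = '''
[
  {"id": "ytc_UgyZZEpDQ4Fol_rRz3d4AaABAg", "responsibility": "none",
   "reasoning": "unclear", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgwmnECZLmYxsytfsqR4AaABAg", "responsibility": "ai_itself",
   "reasoning": "deontological", "policy": "liability", "emotion": "outrage"}
]
'''

def index_by_comment_id(response_text):
    """Parse a raw LLM response and index its codings by comment ID."""
    codings = json.loads(response_text)
    return {row["id"]: row for row in codings}

lookup = index_by_comment_id(raw_response)
coding = lookup["ytc_UgwmnECZLmYxsytfsqR4AaABAg"]
print(coding["responsibility"], coding["emotion"])  # ai_itself outrage
```

In a real pipeline the same index would let the inspector resolve any clicked sample's ID straight to the coding row displayed in the table above.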