Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples

| Comment ID | Preview |
|---|---|
| ytc_UgyXDZ7Ji… | So much easier to make them look like the plastics now because they all look the… |
| ytc_UgxLjKmBz… | The problem we have as a people is that we were taught that inanimate objects ha… |
| ytc_UgyOrsnF_… | If you’re trying to cause ChatGPT to experience a moral dilemma, you will surely… |
| ytc_UgxGeNnAg… | For most people, their dignity isn't tied to how they earn a living. There are f… |
| ytc_Ugww9OhzZ… | Surely water is recyclable.. as long as it’s not discharged into the ocean , the… |
| ytc_UgzOVqN0B… | The scariest thing is that a.i has no concepts of time so it's not limited by ou… |
| ytc_UgxHE-S4S… | Post-modernism ruined art way before AI came along. It's main contribution being… |
| ytc_UgxcgX8qL… | Musk is warning every one of ai. But then he goes on to build the biggest newest… |
Comment
10 years ago my BS in CS capstone project was an AI project. I guess you could call it a Small Statistical Model. I trained on 5 years of NBA basketball game stats instead of all written words, but it was the same idea, matrix math, randomization, and eventually you get a series of weights assigned to each stat which I could then apply to current team's season average stats to try to predict the outcomes of games. I'm sure today's LLM training algorithms are exponentially more sophisticated, but you wouldn't call this "intelligence" or be afraid that it might kill us all. So what's the difference? Is it just the fact that it's language and that how we think so it scares us or does the size and therefore depth actually meaningfully change what the model is capable of.
youtube · AI Moral Status · 2025-10-31T02:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | indifference |
| Coded at | 2026-04-26T23:09:12.988011 |
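For anyone consuming these records downstream, here is a minimal sketch of one coded record as a Python structure. The field names mirror the table above; the value sets contain only the categories that appear in the raw responses on this page, so treat them as assumptions rather than the full codebook. The class name `CodedComment` and the validation helper are illustrative.

```python
from dataclasses import dataclass
from datetime import datetime

# Values observed in the raw LLM responses on this page; the actual
# codebook may define additional categories (assumption).
RESPONSIBILITY = {"none", "developer", "ai_itself"}
REASONING = {"unclear", "consequentialist", "deontological", "mixed"}
POLICY = {"unclear", "none", "ban", "regulate", "liability"}
EMOTION = {"indifference", "mixed", "approval", "outrage", "fear"}

@dataclass
class CodedComment:
    comment_id: str      # e.g. "ytc_Ugwkf5I1VG9-3QPcsiV4AaABAg"
    responsibility: str  # who the comment holds responsible
    reasoning: str       # style of moral reasoning
    policy: str          # policy stance expressed
    emotion: str         # dominant emotional tone
    coded_at: datetime   # when the coding was produced

    def is_valid(self) -> bool:
        """Check each dimension against the observed value sets."""
        return (
            self.responsibility in RESPONSIBILITY
            and self.reasoning in REASONING
            and self.policy in POLICY
            and self.emotion in EMOTION
        )
```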
Raw LLM Response
[{"id":"ytc_Ugwkf5I1VG9-3QPcsiV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgyKImzJEI5bjBdfi4V4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgxNSpsc9xXpxxv-FSF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgyuML_V0-B5EECqCo94AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgyU5Jdm4-eoCuE-nIB4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
{"id":"ytc_Ugy2OLctEGun2J6u1IV4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgxaZLBfKqrXIvI_dMt4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgxKHywUUqGabg76XMF4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"mixed"},
{"id":"ytc_UgyKU94IJV3IOuG9TWl4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgxwLPDTCf0z62TS02d4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"}]