Raw LLM Responses
Inspect the exact model output for any coded comment, or look up a record directly by its comment ID.
Random samples:

- ytc_UgxrE2eaH…: "SO MUCH OF TODAYS RESPOCES IS THAT ISNT FAIR WHAT IF AI DECIDES HUMANS …"
- ytc_Ugy7077bn…: "Blumenthal has conducted one of most influential and professional hearing I've s…"
- ytc_UgzoGMdp5…: "the digital divide is not about Chinese versus west IA, it is between those who …"
- ytr_UgxsEVFf0…: "True, but we are no where near that limit. Maybe in 100 years or so we'll final…"
- ytc_Ugy_Uqs-q…: "I think AI will become a vital part in many functions of human life, creative an…"
- ytr_UgzcW67tz…: "There is littlerally cases of huge companies using ai instead of real artists. M…"
- ytr_UgyzCxdK2…: "let me know when you have a confident number on those "replacement" jobs for hun…"
- ytc_Ugx-qb7k6…: "Ai is dangerous because people are dangerous. We create something to think like …"
Comment

Here's the thing that I believe in. The current set of models to my knowledge is being on data made by us humans. In that sense the so called AI is supposed to be a reflection of the whole collective humanity. Now let me ask you what will happen to a person when you strip them of all that makes them human? Their emotions, attachments, sense of morality everything? Yeah we won't have to think too deeply to imagine what that would look like. Someone who does anything and everything to achieve their goal, to get the best outcome, to be the most efficient. Now also add the negativity that we all have inside us that's being held back by our sense of morality. Well you end up with nothing but a monster.

youtube | AI Moral Status | 2025-12-23T13:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | deontological |
| Policy | unclear |
| Emotion | fear |
| Coded at | 2026-04-27T06:26:44.938723 |
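Each record is coded along the four dimensions in the table above, and coded values can be checked against the codebook before analysis. A minimal validation sketch — the allowed value sets below are inferred from the records visible in this dump, not an authoritative schema:

```python
# Allowed codebook values per dimension, inferred from the sampled records
# in this dump (an assumption, not the project's definitive codebook).
ALLOWED = {
    "responsibility": {"ai_itself", "distributed", "developer", "user", "none"},
    "reasoning": {"deontological", "consequentialist", "virtue", "mixed", "unclear"},
    "policy": {"regulate", "none", "unclear"},
    "emotion": {"fear", "outrage", "resignation", "indifference", "mixed"},
}

def validate(record: dict) -> list[str]:
    """Return a list of problems; an empty list means the record passes."""
    problems = []
    for dim, allowed in ALLOWED.items():
        value = record.get(dim)
        if value not in allowed:
            problems.append(f"{dim}={value!r} not in codebook")
    return problems

# The coded record from the table above passes the check.
rec = {"responsibility": "distributed", "reasoning": "deontological",
       "policy": "unclear", "emotion": "fear"}
print(validate(rec))  # []
```

A record with a missing or unrecognized value produces one problem string per failing dimension, which makes malformed model output easy to flag in bulk.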
Raw LLM Response

```json
[
{"id":"ytc_UgwyhPTQuNXjDgXnq1t4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"fear"},
{"id":"ytc_UgxIcd2lCwrjPsMj0vN4AaABAg","responsibility":"distributed","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgxnV_s7QrJOx3r-Xlt4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugw_rQIJ7kmyZ0zRLsh4AaABAg","responsibility":"ai_itself","reasoning":"virtue","policy":"none","emotion":"mixed"},
{"id":"ytc_UgxbJn0tn6JfzIp759N4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"unclear","emotion":"resignation"},
{"id":"ytc_UgzonsFWpUqyXWpvRjt4AaABAg","responsibility":"distributed","reasoning":"deontological","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgymxM-fHOqI1jia5Ft4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
{"id":"ytc_UgyM-vB_SV2jxlIpH5x4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugy7Rs-XlaW-wgQ0SOd4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugw4dtG0D0tLA8zdQ8R4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"indifference"}
]
```
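Because the raw response is a JSON array of per-comment records, looking up a coded comment by its ID reduces to parsing the array and building an index. A minimal sketch, using a hypothetical two-record response in the same shape as the dump above (real comment IDs are much longer):

```python
import json

# Hypothetical raw LLM response; the field names match the records above,
# but the IDs and values here are placeholders for illustration.
raw_llm_response = '''[
{"id":"ytc_A","responsibility":"distributed","reasoning":"deontological","policy":"unclear","emotion":"fear"},
{"id":"ytc_B","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"indifference"}
]'''

records = json.loads(raw_llm_response)
by_id = {r["id"]: r for r in records}  # index once for O(1) lookup by comment ID

print(by_id["ytc_A"]["emotion"])  # fear
```

Indexing once up front keeps the "look up by comment ID" path cheap even when a batch response contains many coded comments.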