Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- "All AI issues are very much a first world problem. There are millions of people …" (ytc_UgzkrVSAX…)
- "We can extend existing mechanisms such as unemployment insurance and subsidized …" (ytc_Ugz5MiuoE…)
- "AI is the most dirtiest cheating tool of all time, people who pretend to be arti…" (ytc_UgygTcvBh…)
- "Or maybe even if the AI gets into someone's head and tells them to murder someon…" (ytr_UgyhMfTaP…)
- "People who use AI for drawings/paintings are BAD TBH LIKE WHO USES AI FOR DRAWIN…" (ytc_Ugx6c8ulH…)
- "There is so much content of the guy out there. Any "creator" that says how AI sh…" (ytr_Ugw9rrrz9…)
- "The AI Sydney was in love. There is hope yet that AI could learn empathy toward …" (ytc_Ugz8bmm9Q…)
- "most excellent! probably out of my price range, but I sure think that would be f…" (ytc_UgxpyIW55…)
Comment
This is a fantastic video that really sums up a lot of what concerns me about the explosion of AI lately. There's another element I worry about, though; if an artificial intelligence emerges that is human-equivalent or greater, does it have rights? I would think the answer is probably yes, but there will be a huge financial incentive for humans to resist that for as long as possible - it will be very convenient and profitable to effectively enslave an intelligence of that level. If people have managed to convince themselves that other HUMANS don't deserve rights and freedom when it's profitable to do so, how much easier is it to convince ourselves of that when it's something that doesn't look like us? I just have this worry that we're going to do some really, really regrettable shit before we work that out.
youtube · AI Moral Status · 2024-04-08T07:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | deontological |
| Policy | unclear |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
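The table above is a straightforward rendering of one coded row. As a minimal sketch (the helper name and field order are illustrative, not the tool's actual code), it could be produced like this:

```python
def render_coding_result(row: dict, coded_at: str) -> str:
    """Render one coded comment as a markdown dimension/value table.

    `row` holds the four coding dimensions; `coded_at` is the timestamp
    recorded when the code was assigned.
    """
    lines = ["| Dimension | Value |", "|---|---|"]
    for dim in ("responsibility", "reasoning", "policy", "emotion"):
        lines.append(f"| {dim.capitalize()} | {row[dim]} |")
    lines.append(f"| Coded at | {coded_at} |")
    return "\n".join(lines)

# Example: the row shown above for the "AI Moral Status" comment.
row = {"responsibility": "none", "reasoning": "deontological",
       "policy": "unclear", "emotion": "fear"}
print(render_coding_result(row, "2026-04-26T23:09:12.988011"))
```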
Raw LLM Response
[
{"id":"ytc_UgzM8b_npQbgwBNd_Th4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_UgzkPAMQPrCEm7ZTD8h4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_Ugw0HHoayPHS7RIgmtN4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytc_UgwNBtbT9A_mxPPC4wN4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_UgxfUKWOCdBHrBqiJFN4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgyYrG3jPLQ7ve3Fui94AaABAg","responsibility":"none","reasoning":"deontological","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugxk2mjrnKnleJGRUsR4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxlXOuVID8nvSE13Cx4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_UgyunI4LKkwnEHXE_xJ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_UgwO2c-Ookj3qnhNVCR4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"ban","emotion":"outrage"}
]
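A raw response like the one above needs to be parsed and validated before the codes can be trusted. The sketch below parses one batch and drops rows whose values fall outside the codebook; the allowed value sets are assumptions inferred only from the sample outputs shown here, and the real codebook may differ.

```python
import json

# Allowed values per coding dimension -- ASSUMED, inferred from the
# sample rows above; the actual codebook may include more labels.
ALLOWED = {
    "responsibility": {"none", "user", "ai_itself", "unclear"},
    "reasoning": {"consequentialist", "deontological", "mixed", "unclear"},
    "policy": {"none", "ban", "unclear"},
    "emotion": {"fear", "outrage", "approval", "indifference", "mixed"},
}

def parse_coded_batch(raw: str) -> list[dict]:
    """Parse one raw LLM response; keep only rows with in-codebook values
    and an ID that looks like a YouTube comment (ytc_) or reply (ytr_)."""
    rows = json.loads(raw)
    valid = []
    for row in rows:
        in_codebook = all(row.get(dim) in vals for dim, vals in ALLOWED.items())
        if in_codebook and row.get("id", "").startswith(("ytc_", "ytr_")):
            valid.append(row)
    return valid

# Example with one row from the response above.
raw = ('[{"id":"ytc_UgzM8b_npQbgwBNd_Th4AaABAg","responsibility":"none",'
       '"reasoning":"unclear","policy":"none","emotion":"mixed"}]')
print(parse_coded_batch(raw))
```

Rejecting out-of-codebook rows instead of correcting them keeps the coding conservative: a malformed row is simply re-queued rather than silently coerced into a valid label.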