Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- "AI will only take over jobs that you could train monkeys to do. AI does not have…" (ytc_Ugxv0S8qV…)
- "AI is supposed to find its own solution for that. If we're nice to it, it might …" (ytr_Ugw3wdX2A…)
- "Tax payers will wonder why education is cut this year... Paying for bad cop beha…" (ytc_UgxzwgP72…)
- "I'm not an image artist but find myself in need of creating visual depictions of…" (ytc_UgwOF9qgs…)
- "I'm ngl I have not heard ai support from a single disabled artist. And yet that'…" (ytr_UgzRx43-2…)
- "I think one of the most important aspects of being a successful software enginee…" (ytc_UgxAUzPtG…)
- "Gone wrong, I don't think so. It looks like it's going damn good for the robot…" (ytc_UgwwbgBE-…)
- "emotions is what makes use human, a robot can even be like an animal, let alone …" (ytc_UgyF6bCRd…)
Comment
Next time you get Neil on, have him describe why he thinks the universe is objectively knowable. That would be awesome!
I think it's bit naive to believe there is not at least some potential for AI to be a negative if not destructive force. Yes, AI will change how we do things and from one angle, for the better. However, people using it to write papers (lack of experiential learning and diminished critical thinking), job reduction, i.e. even smaller distribution of wealth, mass manipulation by bad actors. Now, what happens when an AI becomes an order of magnitude "smarter" than any human? I don't have a clue, but I can imagine a myriad of ways that it could be anywhere from catastrophic to entering a new age of thriving that is not currently imaginable. Taking a position of optimism or pessimism is a mistake. I believe this issue needs as much if not more international attention and cooperation as say nuclear armaments. Just my opinion.
Platform: youtube · Topic: AI Moral Status · Posted: 2025-08-04T20:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | mixed |
| Policy | unclear |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_UgyrhqqjhCJrzclwV-h4AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgwcdFgY2WglAAehkvp4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_Ugw37CTH40v442OwJMp4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgwFHtangKcgYMVZb7F4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwgYTpfr2kdCGaLzBZ4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugy3v47MSxw5KMiapqN4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugy1YoZRjaNunr9_5Zt4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_UgzwD9RY9I7S6T58sIh4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"},
{"id":"ytc_UgxvDl_kwB5ashMYHxF4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugx2bTriMLH3IpyR_sl4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"outrage"}
]
```
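A raw response like the one above can be parsed and indexed by comment ID, validating each row against the coding scheme. A minimal sketch in Python, assuming the allowed values per dimension are exactly those seen on this page (the real codebook may define more):

```python
import json

# Allowed values for each coding dimension. ASSUMPTION: inferred from the
# values visible on this page; the actual codebook may differ.
ALLOWED = {
    "responsibility": {"government", "company", "ai_itself", "distributed", "none"},
    "reasoning": {"deontological", "consequentialist", "mixed", "unclear"},
    "policy": {"regulate", "unclear", "none"},
    "emotion": {"outrage", "approval", "fear", "indifference", "mixed", "resignation"},
}

def parse_codings(raw: str) -> dict:
    """Parse a raw LLM response (a JSON array of coding rows) and index
    the rows by comment ID, dropping any row whose values fall outside
    the allowed sets."""
    out = {}
    for row in json.loads(raw):
        cid = row.get("id")
        if cid and all(row.get(dim) in vals for dim, vals in ALLOWED.items()):
            out[cid] = {dim: row[dim] for dim in ALLOWED}
    return out

# One row from the response above, used as a lookup example.
raw = ('[{"id":"ytc_Ugy3v47MSxw5KMiapqN4AaABAg",'
       '"responsibility":"distributed","reasoning":"mixed",'
       '"policy":"unclear","emotion":"fear"}]')
coded = parse_codings(raw)
print(coded["ytc_Ugy3v47MSxw5KMiapqN4AaABAg"]["emotion"])  # fear
```

Indexing by ID makes the "look up a coded comment" step a single dictionary access, and the validation pass surfaces any rows where the model drifted outside the codebook.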