Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect:

- "If AI continues on this path umm, unregulated, the only jobs left will be AI mai…" (ytc_Ugz7tfgXv…)
- "ART, MAKING MOVIES or VIDEOS , IF OU ARE USNG AI THEN IT IS NOT AN ORIGNAL…" (ytc_Ugw_2rlCH…)
- "I tried out compose for my recent side project, and I thought asking gpt if it k…" (ytc_UgzpX9zlf…)
- "While sometimes true, it's not always true. Not all ppl who use Ai make garbage …" (ytr_UgxyFJ5d_…)
- "AI "artists" are effectively just in a sense, commissioning the AI to make art f…" (ytc_UgxYI7GGY…)
- "People who are scared are scared not because their art is trash, but because the…" (ytr_UgwnPBiZJ…)
- "In Dune (books), AI made humanity degenerate so much they had to forbid making a…" (ytc_UgwQYhGFc…)
- "You don't need to be an expert. The media has been doing this long before AI. Us…" (ytc_Ugx5-AKep…)
Comment

> What most of these doomsday scenarios seem to miss is what would be the AIs motivation. They seem to apply human compulsion to a computer. It will not be the same -- machines don't have the same needs or wants so its impossible to predict a future with AI.

Source: youtube · Video: AI Moral Status · Posted: 2025-06-27T21:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
  {"id":"ytc_Ugyg10bTuFC7osW6pwZ4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgymieBgfWROVpC0DGt4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_UgwzsYsTVUzz0D9sjTh4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"},
  {"id":"ytc_UgzNQxJJWTYJTy8pEdB4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
  {"id":"ytc_Ugxt5Bt6sHT68gAES0h4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgxDTP0DDmkIv7K9MHp4AaABAg","responsibility":"user","reasoning":"virtue","policy":"liability","emotion":"fear"},
  {"id":"ytc_UgxfDnPM8xmGvcg1aaR4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgxONx_4BOXD2bM9wfN4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxioKzRzykGqjRa0Ox4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgxXs-5V1905ubVJ_Qt4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"outrage"}
]
```
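A raw response in this shape can be parsed and checked against the codebook before the rows are stored. The sketch below is a minimal illustration, not the tool's actual implementation; the allowed category values are inferred only from the examples visible on this page, and the real codebook may contain more.

```python
import json

# Allowed values per dimension, inferred from the samples shown above
# (assumption: the actual codebook may define additional categories).
ALLOWED = {
    "responsibility": {"user", "government", "developer", "ai_itself",
                       "distributed", "unclear"},
    "reasoning": {"deontological", "consequentialist", "virtue", "unclear"},
    "policy": {"none", "ban", "regulate", "industry_self", "liability", "unclear"},
    "emotion": {"outrage", "fear", "approval", "resignation", "indifference"},
}

def parse_coded_batch(raw: str) -> list[dict]:
    """Parse a raw LLM response and keep only rows whose values are in the codebook."""
    rows = json.loads(raw)
    return [
        row for row in rows
        if all(row.get(dim) in allowed for dim, allowed in ALLOWED.items())
    ]

# Hypothetical one-row batch for illustration:
example = ('[{"id":"ytc_x","responsibility":"user","reasoning":"virtue",'
           '"policy":"none","emotion":"fear"}]')
print(parse_coded_batch(example))
```

Rows with out-of-codebook values (a common LLM failure mode, e.g. an invented emotion label) are silently dropped here; a production pipeline would more likely log them for re-coding.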