Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect:

- "AI wont “crumble” from humans not producing art for a year, because it can just …" (`ytr_UgweCuw07…`)
- "You are right, but you are forgetting that, as other responders have suggested, …" (`ytc_UgxFKXGBA…`)
- "1. 99% unemployment? Sure. 2. AI will kill us all? Ok. 3. We live in a simulati…" (`ytc_UgxMMRqJ3…`)
- "AI needs to learn that to lie is bad and it should be able to deduce the differ…" (`ytc_UgxL-mhhq…`)
- "AI is the single worst thing to ever happen to art in the history of the medium.…" (`ytc_UgzEUVwZq…`)
- "Even if it wasn't going to ultimately be used for some sort of mass surveillance…" (`rdc_o562bhm`)
- "Prompt engineering is the true new skill- articulating precise English to the LL…" (`ytc_UgxABFjlG…`)
- "So basically, talk to AI like you finally have a private audience with Sherlock …" (`ytc_Ugyk3w5sl…`)
Comment
People have to have purpose. Without that we tends to lose our reason for being. I think AI taking people's purpose away is going to have a terrible effect on mankind. It's not that I believe our work/career are our reason for being, but I believe God created us with intelligence and unique abilities and a need to get busy and do things. Even sweat work makes me feel better. I think technology has its blessings, but it has a curse side too and AI will too. But it's let out of Pandora's box and it isn't going away. Especially as we consider how essential it is to be advancing ahead of potential enemies who would potentially use AI in a very evil way.
Source: youtube — "AI Moral Status" — 2025-06-04T21:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | resignation |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
  {"id":"ytc_UgxyftFdJiG-Wtb-Uyl4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgwhyXYdZmIkyA4n3kR4AaABAg","responsibility":"government","reasoning":"deontological","policy":"liability","emotion":"indifference"},
  {"id":"ytc_Ugyi7aotmTeW0hGbjFJ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwaentiQjN-zkwW6nZ4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_Ugy8E7LoqMKAlvsv9a94AaABAg","responsibility":"distributed","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_Ugxp2O6OE7eg5EOQ5nV4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"industry_self","emotion":"resignation"},
  {"id":"ytc_Ugw54apVsj0EYfyaVXl4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_UgzhrihmzEGQ56AbH4d4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugyl3AIaLNpFZhAgKcl4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_UgzSo0aENwcAMC3AMg14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"}
]
```
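The "look up by comment ID" view implies parsing a raw response like the one above into a map keyed on comment ID. A minimal sketch of that step in Python — the `SCHEMA` value sets are inferred from the codes visible in this page, not from the actual codebook, and `index_codings` is a hypothetical helper name:

```python
import json

# Allowed values per coding dimension. These sets are inferred from the
# sample output shown above; the real codebook may define more categories.
SCHEMA = {
    "responsibility": {"developer", "government", "distributed", "ai_itself", "none"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"regulate", "liability", "ban", "industry_self", "none", "unclear"},
    "emotion": {"fear", "outrage", "resignation", "indifference"},
}

def index_codings(raw_response: str) -> dict:
    """Parse a raw LLM response (JSON array) and index codings by comment ID.

    Rejects rows whose dimension values fall outside the known categories,
    so malformed model output is caught before it reaches the inspector.
    """
    coded = {}
    for row in json.loads(raw_response):
        for dim, allowed in SCHEMA.items():
            if row.get(dim) not in allowed:
                raise ValueError(f"{row.get('id')}: bad {dim!r} value {row.get(dim)!r}")
        coded[row["id"]] = {dim: row[dim] for dim in SCHEMA}
    return coded
```

With the response indexed this way, rendering the per-comment "Coding Result" table is a dictionary lookup on the comment ID.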