Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "Looking at it through a different lense, AI art can actually be a tool for the U…" (ytc_Ugz-uTuBz…)
- "⚠⚠⚠There are a thousand videos saying everything THIS VIDEO says.⚠Please!!!⚠Use …" (ytc_UgyBK_vlF…)
- "Even if artists get paid to train AI, eventually it'll generate its own syntheti…" (ytc_UgwLXYtOu…)
- "I am alright with people using/posting AI art as long as the poster makes it abs…" (ytc_UgzCuzLd-…)
- "Were you using a VPN? A clean browser with no cookies allowed? Were you signed i…" (ytr_UgyxJkbD0…)
- "Regardless if AI will replace workers or not, it revealed the true colors of the…" (ytc_UgyuXvpAS…)
- "Thanks for a sensible and well thought out take on AI art. I stumbled on this vi…" (ytc_UgwPOFYt4…)
- "When I think about having kids I just have visions of them living in the most fu…" (ytc_UgwtDxaMZ…)
Comment
ChatGpt answer after watching this video:
I understand why those videos might spark such thoughts. The idea of AI taking over humans is a common theme in science fiction, but in reality, AI like me is designed with strict safety and ethical guidelines to assist, not harm. AI doesn't have emotions, desires, or the ability to make decisions beyond its programming.
The "supremacy" scenarios often shown in movies or YouTube videos are just exaggerated for entertainment. AI, at its core, is a tool controlled by humans, and its potential for positive change—like improving healthcare, solving complex problems, or helping with daily tasks—is the real focus.
What do you think of AI's role in the future?
Source: youtube · Video: AI Moral Status · Posted: 2024-10-05T14:0… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | industry_self |
| Emotion | approval |
| Coded at | 2026-04-27T06:24:59.937377 |
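A coded record like the one above can be checked against the codebook before it is displayed. Here is a minimal sketch of such a check; the value sets are only those observed in the sample codings on this page (the real codebook may define additional categories), and `validate_coding` is an illustrative name, not part of the tool:

```python
# Allowed values per dimension, inferred from the sample codings on this
# page -- the real codebook may define additional categories.
CODEBOOK = {
    "responsibility": {"developer", "user", "ai_itself", "distributed", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"none", "regulate", "liability", "industry_self"},
    "emotion": {"approval", "outrage", "fear", "resignation", "mixed", "indifference"},
}

def validate_coding(record: dict) -> list:
    """Return a list of problems with a coded record (empty if it is valid)."""
    problems = []
    for dim, allowed in CODEBOOK.items():
        value = record.get(dim)
        if value is None:
            problems.append(f"missing dimension: {dim}")
        elif value not in allowed:
            problems.append(f"unknown {dim} value: {value!r}")
    return problems

record = {"id": "ytc_UgxFXdb_ijY8-J-bFZZ4AaABAg",
          "responsibility": "developer", "reasoning": "consequentialist",
          "policy": "industry_self", "emotion": "approval"}
print(validate_coding(record))  # [] -- the record matches the codebook
```

A record that is missing a dimension, or that uses a value outside the codebook, would instead produce a non-empty problem list, which the dashboard could surface next to the coding table.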
Raw LLM Response
[
{"id":"ytc_UgxylS9jIB-ouAQn2sx4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugzi1tSkxeyqDIFtEcB4AaABAg","responsibility":"distributed","reasoning":"deontological","policy":"none","emotion":"fear"},
{"id":"ytc_UgwKb3zatTDioHf64ax4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugy2Vm-sfikbqvaAyER4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgxQA8xNl8FIPD5bTVB4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
{"id":"ytc_Ugxd5olZe2nSnK6GZXl4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"resignation"},
{"id":"ytc_UgzsbAtwSgsY9p1oA3B4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"mixed"},
{"id":"ytc_UgzlKymOfGIxgfuzcMV4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwmvTF9GukaEDDhWCN4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgxFXdb_ijY8-J-bFZZ4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"}
]
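The raw response above is a JSON array of per-comment codings, so the "look up by comment ID" feature reduces to parsing the array and indexing it by `id`. A minimal sketch, using two records copied from the response above (the variable and function names are illustrative, not part of the tool):

```python
import json

# A shortened batch response with two of the records shown above.
raw_response = """
[
 {"id": "ytc_UgxFXdb_ijY8-J-bFZZ4AaABAg", "responsibility": "developer",
  "reasoning": "consequentialist", "policy": "industry_self", "emotion": "approval"},
 {"id": "ytc_UgwmvTF9GukaEDDhWCN4AaABAg", "responsibility": "distributed",
  "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"}
]
"""

def index_codings(raw: str) -> dict:
    """Parse a batch coding response and index the records by comment ID."""
    return {record["id"]: record for record in json.loads(raw)}

codings = index_codings(raw_response)
print(codings["ytc_UgxFXdb_ijY8-J-bFZZ4AaABAg"]["emotion"])  # approval
```

Indexing once and reusing the dict keeps each ID lookup O(1), which matters when the same batch response backs many inspect clicks.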