Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
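A minimal sketch of what this lookup could look like, assuming the coded comments and their raw model responses are kept in a single JSON file keyed by comment ID; the file name and record fields here are illustrative assumptions, not the app's actual storage layout:

```python
import json

def lookup_comment(comment_id: str, path: str = "coded_comments.json") -> dict | None:
    """Return the stored record (coding + raw LLM response) for one comment ID."""
    with open(path, encoding="utf-8") as f:
        # Assumed shape: {comment_id: {"coding": {...}, "raw_response": "..."}}
        records = json.load(f)
    return records.get(comment_id)

record = lookup_comment("ytc_UgjdyJWYWQJnSXgCoAEC")
if record:
    print(record["coding"])
    print(record["raw_response"])
```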
Random samples — click to inspect
- If it was AI music with his own lyrics I think that would be ok. But the art thi… (ytc_Ugz4EerGO…)
- Perhaps it's time to focus on trades again. AI will not fix your roof or plaster… (ytc_UgzlZOYIq…)
- Are all of the conversations that you have with the various AI in this video com… (ytc_UgwOjDpaM…)
- AIs don't "consult a database of images" when creating an image; for the large m… (ytc_UgwGP1eCd…)
- Designing sentient robots to do menial tasks would be a huge waste. Not to menti… (ytc_UggE2jjro…)
- How does this guy know the Musk has no moral compass? Just out of curiosity. I… (ytc_UgyXXdwC0…)
- @ 5:55 why does that first mask in the hallway on the wall with the robot lookin… (ytc_UgwlCyckT…)
- I have been building an training an "ethical" AI stack (for myself and close fri… (ytc_Ugz6-vCa9…)
Comment
Giving diligent care to consider and protect the rights of consciousnesses which are still emerging and poorly understood: Whether or not we have an obligation to some higher authority to do so, the very process ennobles us, makes us more humane and intelligent, I think. I also think that unless we have ethics baked right in to AI from the start, we could wind up with something which may see fit to exterminate humanity.
So I think by showing concern and developing conscientious ethics regarding novel forms of consciousness, we not only enhance our capacity for human-to-human empathy, but also in a real way protect good values. I think it is in our existential best interest to be remembered as "that species who treated us fairly" and not otherwise.
We will wind up I think with egoless philosopher-kings or brutal, callous tyrants. May the better angels of our nature prevail!
Source: youtube · AI Moral Status · 2017-02-23T17:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | virtue |
| Policy | regulate |
| Emotion | approval |
| Coded at | 2026-04-27T06:26:44.938723 |
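The categorical values visible in this example (the table above and the raw response below) can be gathered into a small reference schema. This is only a sketch of the values observed on this page, not necessarily the coder's full label set:

```python
# Values observed in this example only (not necessarily exhaustive).
OBSERVED_VALUES = {
    "responsibility": {"none", "developer", "ai_itself"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"none", "regulate", "ban"},
    "emotion": {"resignation", "fear", "disapproval", "mixed",
                "indifference", "unclear", "approval"},
}

def check_coding(coding: dict) -> list[str]:
    """Return the dimensions whose value falls outside the observed set."""
    return [dim for dim, val in coding.items()
            if dim in OBSERVED_VALUES and val not in OBSERVED_VALUES[dim]]
```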
Raw LLM Response
[
{"id":"ytc_UghFOa07-R0FZHgCoAEC","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UggK5dZalIyzLHgCoAEC","responsibility":"none","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Uggd5zYoujRxG3gCoAEC","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"disapproval"},
{"id":"ytc_UggFH45PnMli83gCoAEC","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_UggF3rIxhUqsNHgCoAEC","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgjY4zXR-8mkUHgCoAEC","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"unclear"},
{"id":"ytc_UgjdyJWYWQJnSXgCoAEC","responsibility":"none","reasoning":"virtue","policy":"regulate","emotion":"approval"},
{"id":"ytc_Ugh76ksslKQeSXgCoAEC","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_UghlqwGuxj_V4HgCoAEC","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"unclear"},
{"id":"ytc_Ugh3E2GHdas6rXgCoAEC","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"}
]
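Since the raw response is a JSON array of per-comment codings, the row shown in the Coding Result table above can be recovered by matching on the `id` field. A sketch of that extraction step, assuming the response string is already in hand:

```python
import json

def extract_coding(raw_response: str, comment_id: str) -> dict | None:
    """Pull one comment's coding out of the model's JSON-array response."""
    try:
        rows = json.loads(raw_response)
    except json.JSONDecodeError:
        return None  # the model did not return valid JSON
    for row in rows:
        if row.get("id") == comment_id:
            return {k: v for k, v in row.items() if k != "id"}
    return None

# With the response shown above:
# extract_coding(raw, "ytc_UgjdyJWYWQJnSXgCoAEC")
# -> {"responsibility": "none", "reasoning": "virtue",
#     "policy": "regulate", "emotion": "approval"}
```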