Raw LLM Responses
Inspect the exact model output behind any coded comment.
Look up by comment ID
Random samples — click to inspect
@basedonwhom The thing is, the filter is another AI. It's not just a database of…
ytr_Ugya3LyJT…
No offence intended but your analysis makes no sense.
If Ai can take the job of …
ytc_Ugxb-j3pj…
I have to write for class. Now here’s what can happen:
1. If I write something t…
ytc_UgyAU6ch_…
So what's the reason again? "you've restricted what it can do"? What kind of rea…
ytc_UgyOpYoRE…
What I would recommend as of now, would be to every digital artist (and even tra…
ytr_Ugwb32gk_…
Why should you start a business building products for other people when those ot…
ytr_UgwzLlXz_…
So is AI going to take over all jobs before or after my talk to text works prope…
ytc_UgzjBoCn9…
Humans are weak because we are driven by fear rather than logic. If this ends up…
ytc_UgyZ3qAi5…
Comment
Much like a child growing up, what they are shown and told either creates or warps the thoughts in their minds. The more positive information that is received by the start of the programming, and is continually prolonged, then we may have created some kind of.. Real, feeling, entity. Not organically alive but.. In terms of information and emotion? I'd say the soul of a robot is only as strong as the one you give it.
youtube
AI Moral Status
2023-10-18T23:5…
♥ 47
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | virtue |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
[{"id":"ytc_UgwaO-a1pb4Ifg4OHtF4AaABAg","responsibility":"distributed","reasoning":"deontological","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugy6ycqi7Klm8gjDBst4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_UgwP3cO0zvQwg_-Zy_d4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgzfktJPEXW1c2QDw9J4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"fear"},
{"id":"ytc_UgxOAfDnitRVOCxT7KB4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgygQvktmV-LFmqloKR4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgxP_Yd9hbzNSDWk4Y54AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_UgySXBtDPEAQYuqjRlN4AaABAg","responsibility":"user","reasoning":"deontological","policy":"liability","emotion":"fear"},
{"id":"ytc_UgwV0BSzoZ8tM1HTu894AaABAg","responsibility":"developer","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugzg6Mfa6zrKCtNp5g54AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"indifference"}]
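A raw response like the one above can be checked before its codes are stored. The sketch below is a minimal, hypothetical validator: the value sets are only those observed in this response and in the coding-result table (the real codebooks may define more values), and the function name `validate_coding` is an assumption, not part of the tool.

```python
import json

# Dimension values observed in this tool's output -- illustrative only;
# the actual codebooks may allow additional values.
OBSERVED_VALUES = {
    "responsibility": {"developer", "distributed", "none", "ai_itself", "user"},
    "reasoning": {"virtue", "deontological", "consequentialist", "unclear"},
    "policy": {"none", "regulate", "ban", "liability"},
    "emotion": {"approval", "fear", "outrage", "mixed", "indifference"},
}

def validate_coding(raw: str) -> list[dict]:
    """Parse a raw LLM response and keep only well-formed records.

    A record is kept when it is a dict with an "id" field and every
    dimension carries one of the observed values.
    """
    records = json.loads(raw)
    valid = []
    for rec in records:
        if not isinstance(rec, dict) or "id" not in rec:
            continue
        if all(rec.get(dim) in vals for dim, vals in OBSERVED_VALUES.items()):
            valid.append(rec)
    return valid
```

For example, a record coded `{"responsibility": "developer", "reasoning": "virtue", "policy": "none", "emotion": "approval"}` passes, while one with a misspelled or novel value would be dropped for manual review rather than silently stored.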