Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
I must be the only person who understands what could make AI work. Besides needing proper hardware, it is really the care and cleverness of the creator that will make it work the way we want it to.
In science fiction, AI gone bad is almost entirely on the negligence of the creator. SkyNet, anybody? You create an intelligent defense system and don't bother designing it to value life or something? Come on.
And I am getting sick of these idiots fearing the development of AI. It's uncalled for. Any intelligent machine used in the military or law enforcement should be designed to protect people from criminals or any harm.
With that protocol in place, it would be impossible for a system like SkyNet to destroy most of humanity.
Going past that suggested protocol, I think machines like Data from Star Trek should have other goals to strive for (not humanity), like expanding science, exploration, and charity.
youtube
2015-08-05T23:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | virtue |
| Policy | unclear |
| Emotion | indifference |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_Ugwm9I9NcRQElvQfqu54AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgwRhW6ydR3WoIlU3gl4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgghtrugE12abngCoAEC","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugic-8CdfbK863gCoAEC","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgiiVzQEVXTO8XgCoAEC","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgigNAG8ggHJ7HgCoAEC","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugigkb4gWN8_I3gCoAEC","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugi_4VKjBann7HgCoAEC","responsibility":"none","reasoning":"deontological","policy":"liability","emotion":"approval"},
{"id":"ytc_Ugi9Gszi21MTEngCoAEC","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UggnLXyVGHuX8XgCoAEC","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"outrage"}
]
```
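The mapping from a raw LLM response to a coding-result table can be sketched as below. This is a minimal, hypothetical helper (the `lookup` function and `DIMENSIONS` tuple are assumptions for illustration, not the tool's actual code): it parses the JSON array and returns the four coded dimensions for a given comment ID.

```python
import json

# Abbreviated raw response in the same shape as the sample above
# (one object per coded comment, four coding dimensions plus an "id").
raw = """[
{"id":"ytc_Ugi9Gszi21MTEngCoAEC","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UggnLXyVGHuX8XgCoAEC","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"outrage"}
]"""

# The four coding dimensions shown in the result table.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def lookup(raw_response, comment_id):
    """Return the coded dimensions for one comment ID, or None if absent."""
    for row in json.loads(raw_response):
        if row.get("id") == comment_id:
            # Default to "unclear" for any dimension the model omitted.
            return {dim: row.get(dim, "unclear") for dim in DIMENSIONS}
    return None

coding = lookup(raw, "ytc_Ugi9Gszi21MTEngCoAEC")
print(coding)
# {'responsibility': 'developer', 'reasoning': 'virtue',
#  'policy': 'unclear', 'emotion': 'indifference'}
```

A real pipeline would also validate each value against the allowed code set and flag IDs the model dropped or duplicated, but the lookup itself reduces to this.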