Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples

- "They should add hardware signatures to phones and security cameras so you can te…" (`rdc_izlgiob`)
- "One of my biggest problems with AI is, that they take away responsibility that c…" (`ytc_UgxquTJIg…`)
- "It is true, LLMs cannot keep track of the entire application. There have been ma…" (`rdc_nbhfoff`)
- "Do you rekognize that the story of ai changes slowly? From: nobody needs to work…" (`ytc_UgysSJc5S…`)
- "@ImWorthless-dk1jb As I see it, I don't think they do honestly. It's not like t…" (`ytr_UgzcTiHO0…`)
- "I kinda feel weird about this image, it's just a gut feeling, but it feels sooo …" (`ytc_Ugxuww0_m…`)
- "Same MO... the blame for what happens next is projected on the solution they've …" (`ytc_UgyyyIRBq…`)
- "AGI was defined as "AI that generates $100 billion in revenue for OpenAI" Money,…" (`ytc_UgzBgAwfn…`)
Comment
if we make true ai and they are sentient they should have all the same rights and any human and thus turning them off would be at best assault and at worse murder
Platform: youtube
Posted: 2015-08-06T20:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | deontological |
| Policy | liability |
| Emotion | approval |
| Coded at | 2026-04-26T23:09:12.988011 |
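Each coded comment carries four categorical dimensions. A minimal validation sketch is shown below; the allowed value sets are only those observed in the samples on this page (the full codebook may define more), and `validate_coding` is a hypothetical helper name.

```python
# Hypothetical validator for one coded comment. The allowed values below are
# only those that appear in this page's samples; the real codebook may be larger.
ALLOWED = {
    "responsibility": {"none", "ai_itself", "government", "developer"},
    "reasoning": {"deontological", "consequentialist", "virtue", "unclear"},
    "policy": {"liability", "regulate", "ban", "none", "unclear"},
    "emotion": {"approval", "fear", "outrage", "indifference"},
}

def validate_coding(coding: dict) -> list[str]:
    """Return the dimension names whose values fall outside ALLOWED."""
    return [dim for dim, allowed in ALLOWED.items()
            if coding.get(dim) not in allowed]

# The coding result shown in the table above passes cleanly:
coding = {"responsibility": "none", "reasoning": "deontological",
          "policy": "liability", "emotion": "approval"}
print(validate_coding(coding))  # → []
```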
Raw LLM Response
```json
[
{"id":"ytc_Ugwm9I9NcRQElvQfqu54AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgwRhW6ydR3WoIlU3gl4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgghtrugE12abngCoAEC","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugic-8CdfbK863gCoAEC","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgiiVzQEVXTO8XgCoAEC","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgigNAG8ggHJ7HgCoAEC","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugigkb4gWN8_I3gCoAEC","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugi_4VKjBann7HgCoAEC","responsibility":"none","reasoning":"deontological","policy":"liability","emotion":"approval"},
{"id":"ytc_Ugi9Gszi21MTEngCoAEC","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UggnLXyVGHuX8XgCoAEC","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"outrage"}
]
```
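The raw LLM response is a JSON array with one coding object per comment, keyed by `id`. A minimal sketch of turning such a response into a lookup table by comment ID (the function name `index_by_id` is an assumption, not part of the pipeline):

```python
import json

# Two entries excerpted from the raw response shown above.
raw_response = '''[
{"id":"ytc_UgghtrugE12abngCoAEC","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_UgigNAG8ggHJ7HgCoAEC","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]'''

def index_by_id(raw: str) -> dict[str, dict]:
    """Parse one raw LLM response (a JSON array) into {comment_id: coding}."""
    return {entry["id"]: entry for entry in json.loads(raw)}

codings = index_by_id(raw_response)
print(codings["ytc_UgigNAG8ggHJ7HgCoAEC"]["policy"])  # → regulate
```

This is the shape a "look up by comment ID" view needs: one dictionary probe instead of a scan through every batch response.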