Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- Can you imagine people starting to defend robots from abuse just because they lo… — ytc_Ugy7dXumz…
- Healthcare will be massively worth getting into, all these people using robots t… — ytc_Ugw7jH1QA…
- A couple things. Searle doesn't think that it's in principle impossible for a ma… — ytc_Ugi2wDy0p…
- Currently advanced AI is typically trained to imitate humans then tuned to be mo… — ytc_UgyruFTRm…
- Worst part is that People who are seen as experts are asked to use AI , so it ge… — ytc_UgzGkSZs2…
- +TyillestTV2 It's not the AI that controles the driving. You're fundamentally mi… — ytr_UgiEk-5HP…
- Being an artist can be the same, all it takes is to someone proclaim your art as… — ytr_Ugz0S9R_D…
- I mean everything he has said is true . Ai add is also a true . We could end up … — ytc_UgySX-E7U…
Comment
I think AI is right on the brink of exploding doing things we’ve never imagined possible. At first it will be good things like a blind man having his sight restored or they can regrow a limb or something. So then people will form trust in AI. Then they can release a new medicine or pain killer that will end up doing horrific things to people. I’m worried for 2030 and what things will look like then. They will have rolled out things by then.
youtube
AI Responsibility
2025-09-07T16:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
[
{"id":"ytc_UgzeWyk7i5gCeFXHMrp4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_UgzbvFHbx-GrR_1QGPl4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgztL9C10JiREi9f_pB4AaABAg","responsibility":"government","reasoning":"deontological","policy":"none","emotion":"fear"},
{"id":"ytc_UgykFQZGfx4xWzcCh9l4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgyUnr-tYU_kc2STLdR4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgwtxlNmwXpPJ9OKWlJ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgxjwZv-JfIWNGevgVV4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgxNdIB0julrtYDOEhZ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgzZ6RCj3-hWaW2Ju1h4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgygiW94La26H3w31bJ4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"}
]
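The raw LLM response above is a JSON array of coded comments, one object per comment ID, with the four coding dimensions shown in the result table. A minimal sketch of how such a batch could be parsed and validated before indexing by comment ID — note that the allowed value sets below are inferred only from the samples visible on this page, and the real codebook may contain additional categories:

```python
import json

# Allowed values per coding dimension, inferred from the displayed samples
# (assumption: the actual codebook may define more categories).
ALLOWED = {
    "responsibility": {"developer", "company", "government", "ai_itself",
                       "distributed", "none"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed",
                  "unclear"},
    "policy": {"none", "regulate", "liability", "ban"},
    "emotion": {"outrage", "fear", "indifference", "approval"},
}

def validate_batch(raw: str) -> dict:
    """Parse a raw LLM response (JSON array of coded comments) and index it
    by comment ID, rejecting records with missing IDs or unknown values."""
    records = json.loads(raw)
    coded = {}
    for rec in records:
        cid = rec.get("id")
        if not cid:
            raise ValueError(f"record missing id: {rec!r}")
        for dim, allowed in ALLOWED.items():
            value = rec.get(dim)
            if value not in allowed:
                raise ValueError(f"{cid}: unexpected {dim} value {value!r}")
        coded[cid] = {dim: rec[dim] for dim in ALLOWED}
    return coded

# Usage with a single hypothetical record (ID is illustrative, not a real one):
raw = ('[{"id":"ytc_EXAMPLE","responsibility":"company",'
       '"reasoning":"consequentialist","policy":"regulate","emotion":"fear"}]')
batch = validate_batch(raw)
print(batch["ytc_EXAMPLE"]["policy"])  # -> regulate
```

Indexing by ID mirrors the "Look up by comment ID" workflow above: once validated, any coded comment can be retrieved directly by its `ytc_…`/`ytr_…` identifier.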