Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- "Too bad you didn't have it before the election, maybe you could have made Kamala…" (ytc_UgyKEG7av…)
- "My friend lost his job to ai and the worst thing is what he just got that job an…" (ytc_UgyIQmdAk…)
- "What's immoral if you're poor and marginalized, but cool if you're rich/a compan…" (ytc_UgxwPwn2y…)
- "I don't think it really articulates how much tech like this would change interac…" (rdc_mo9szzx)
- "You making this video in a way that doesn't condem AI and then asking if "we're …" (ytc_Ugxyl2nmB…)
- "from a purely scientific standpoint, what's occurring across platforms today isn…" (ytc_UgxC6WRjy…)
- "Everyone's calling for A.I. safety, but where are the calls for A.I. maximallism…" (ytc_Ugw-P7ZFa…)
- "We're only ten years away from Will Smith chasing a Tesla robot with a gun in th…" (ytc_Ugw8fKU79…)
Comment
I don't understand how us as humans feel the need to create an artificial us without the violence and negative part of humans. In essence we are that artificial intelligence robot. If we want the world to be a better place we simply need to apply ourselves not recreate ourselves that would be insane. We know the answers to our planets salvation we know what we need to do to create would peace we don't need a robot to do something we can already do. We cannot let the created become the creator because it will only lead to our destruction as the world is now unless we as humans change. Some people are right the robot only knows what we give it and is harmless at the moment but when it gets to know a human and what we are it will see that we are our own destruction. We create pollution create war create hate drugs manipulation and greed. AI will change the world but not in the way you so called scientist think. Even I can see that and I don't need to be the smartest man in the world to know it.
| Field | Value |
|---|---|
| Platform | youtube |
| Topic | AI Moral Status |
| Posted | 2018-05-25T13:5… |
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | unclear |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
[{"id":"ytc_Ugxi5VE690WoGpqEjmZ4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"resignation"},
{"id":"ytc_UgxgY4-DBDS5TL9AhBp4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
{"id":"ytc_Ugzy0ziiJeb47myq6wR4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_Ugx9M-Bt1nxswZiwCA14AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"},
{"id":"ytc_UgwMKrZUwQra6lhI04N4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgzPe6MP4F2HucwRAHN4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwyWWGh4WMlAYgo11N4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_UgyDfnfE3BX9cz8yHM94AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_Ugz4ioWJJ7DBWs5kGLl4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugw-UykvSAtbWsZ1wSh4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}]
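A minimal sketch of how a raw response like the one above can be parsed and tallied per coding dimension. The dimension names (`responsibility`, `reasoning`, `policy`, `emotion`) come from the Coding Result table; the `RAW` sample, the `tally` helper, and the `"unclear"` fallback for missing fields are illustrative assumptions, not the tool's actual pipeline.

```python
import json
from collections import Counter

# Hypothetical two-row sample shaped like the raw LLM response above.
RAW = '''[
 {"id": "ytc_a", "responsibility": "developer", "reasoning": "consequentialist", "policy": "ban", "emotion": "fear"},
 {"id": "ytc_b", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"}
]'''

# Dimension names taken from the Coding Result table.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def tally(raw: str) -> dict:
    """Parse the JSON array and count how often each value appears per dimension."""
    rows = json.loads(raw)
    counts = {dim: Counter() for dim in DIMENSIONS}
    for row in rows:
        for dim in DIMENSIONS:
            # Fall back to "unclear" when the model omitted a field (assumption).
            counts[dim][row.get(dim, "unclear")] += 1
    return counts

counts = tally(RAW)
print(counts["policy"])
```

A malformed closer (e.g. a stray `)` where `]` belongs) makes `json.loads` raise `json.JSONDecodeError`, which is one reason to validate the raw output before coding results are stored.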