Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
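The same lookup can be done offline. Below is a minimal sketch, assuming the raw batch responses are also stored as a JSON array of records in a file such as `raw_llm_responses.json` (a hypothetical filename, not a documented artifact of this project):

```python
import json

def lookup_raw_coding(comment_id, path="raw_llm_responses.json"):
    """Return the coding record for one comment ID, or None if it is absent.

    Assumes `path` holds a JSON array of records shaped like the batch shown
    under "Raw LLM Response" below; the filename is a placeholder.
    """
    with open(path, encoding="utf-8") as f:
        records = json.load(f)
    return next((r for r in records if r.get("id") == comment_id), None)

# Full IDs are required; the IDs in the sample list below are truncated for display.
# lookup_raw_coding("ytc_UggMYT3QVEugTngCoAEC")
```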
Random samples
- "your channel is great but you failed to give me a single reason to say thank cha…" (ytc_UgwfVQKzv…)
- "Im a first year college student and my main professor uses ai to teach us. He ha…" (ytc_Ugz8Vbfyo…)
- "@MrGrantGregory Idk! 🤣🤷🏽♂️ i mean, it is a good edit! But normally you could se…" (ytr_UgzZL8orF…)
- "I really don’t know much about this subject but i constantly fluctuate between a…" (ytc_UgwYPnkgN…)
- "Because Ai image generation has no context for the images it generates. It's tra…" (ytr_UgwaxO5gD…)
- "8:04: To be fair any statement by Shad in regards to AI art is kinda....not all …" (ytc_Ugy8o0jdx…)
- "This CGI is getting out of hand lol. I definitely thought this was real. 😂 Howev…" (ytc_UgxGMlRDj…)
- "No moral compass he states -- does he personally know Musk ? because when prompt…" (ytc_UgwdWthEk…)
Comment
> The biggest problem in robotics is that they are perfect, unlike humans, they dont do mistakes and they dont age. Building a self-improving AI that would have infinite time to improve itself wouldnt mean the end of humanity if the AI had some rules it has to follow, for example to protect humans from any damage.

youtube · AI Moral Status · 2017-02-23T17:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | approval |
| Coded at | 2026-04-27T06:26:44.938723 |
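The four coded dimensions form a small categorical schema. Below is a minimal validation sketch that uses only the category values visible in the sample batch under "Raw LLM Response"; the project's full codebook presumably defines the complete set, so treat these value lists as assumptions:

```python
# Values observed in the sample batch only; the full codebook may define more.
OBSERVED_VALUES = {
    "responsibility": {"none", "developer", "ai_itself"},
    "reasoning": {"deontological", "consequentialist", "virtue", "mixed", "unclear"},
    "policy": {"none", "regulate", "liability"},
    "emotion": {"approval", "fear", "outrage", "indifference", "resignation", "mixed"},
}

def check_record(record):
    """Return a list of problems found in one coded record (empty if it looks fine)."""
    problems = []
    if not str(record.get("id", "")).startswith(("ytc_", "ytr_")):
        problems.append("unexpected id prefix: %r" % record.get("id"))
    for dim, allowed in OBSERVED_VALUES.items():
        if record.get(dim) not in allowed:
            problems.append("%s=%r not among observed values" % (dim, record.get(dim)))
    return problems
```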
Raw LLM Response
[
{"id":"ytc_UggMYT3QVEugTngCoAEC","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"approval"},
{"id":"ytc_Ugg4EttFwJ0C_HgCoAEC","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UggQic20SC1MG3gCoAEC","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugh9ZDxiKzDTDXgCoAEC","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgheLFoKvgFErngCoAEC","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
{"id":"ytc_UgggD4wUkJmJlngCoAEC","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_UghWccnEejCDEngCoAEC","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytc_UggmC4suz5PNg3gCoAEC","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugi8KLtUpuUXmngCoAEC","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugh9GeRqG8Yl1HgCoAEC","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"}
]
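Each batch response is a single JSON array with one record per coded comment. A minimal parsing sketch follows; the guard against extra text around the array is an assumption about messier responses, since the sample above is already clean JSON:

```python
import json

def parse_batch_response(raw):
    """Extract the JSON array of coding records from a raw batch response string."""
    start, end = raw.find("["), raw.rfind("]")
    if start == -1 or end == -1 or end < start:
        raise ValueError("no JSON array found in response")
    records = json.loads(raw[start:end + 1])
    if not isinstance(records, list):
        raise ValueError("expected a JSON array of records")
    return records
```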