Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples

- ytr_UgwGY0N9e…: "Unfortunately, Part 1,2,3 of the emails where removed by YT. But, you could find…"
- ytr_Ugw3Rw6oI…: "@prettykage yeah, people need to stop calling it art. These are AI images, AI c…"
- ytc_Ugzly8bmQ…: "AI to me is Google on steroids. If you use it to do the work for you you're toas…"
- ytc_UgzsPbcyy…: "The interviewer really likes AI and falls for the hype and Sir Penrose does not …"
- ytc_UggR_H-gu…: "I think if an A.I. reaches a level where it can independently decide it wants to…"
- ytc_UgwjXFvVn…: "The unpredictability of AI’s impact is a lot, but Pneumatic Workflow has helped …"
- ytc_UgyKg0XeL…: "It's not about rich people, it's technology and by nature, unavoidable; it's abo…"
- ytc_UgxPBwIb7…: "Scary! That’s why I never wanted children but you can’t escape it. My sister pas…"
Comment
I think the problem with the conversation around AI is that there is so much 'fanciful' sci fi nonsense that has proliferated in the public consciousness through entertainment media, and click driven online journalism. Eyeballs are valuable to both, balanced viewpoints are not as stimulating as sensationalism.
So there is this tendency to anthropomorphize AI like Ultron or some Star Trek movie.
This is not accurate, AI does not have emotions, feel pain or have the biological drives that we do, and which we imprint on our movie villains.
A few years ago everyone was talking about a tipping point in AI, where it would cross some threshold of increasing power, or increasing intelligence.
The upshot of which is that it becomes very powerful or too powerful to be gotten rid of?
Sounds like a plot device not science.
I'm more concerned with the kind of tasks we are giving our AI. The kind of skills they are honing.
If a super AI would be smarter than humans, do we want to use AI in espionage or conventional warfare in the time period leading up to that tipping point.
Do we want to use AI in securities trading for the same reason?
After all an AI has no worry of crashing economies. It can have Asimovs laws programmed into it. But if the whole world's currency is devalued overnight, does that qualify as harm?
Source: youtube · AI Moral Status · 2022-07-05T22:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | mixed |
| Policy | none |
| Emotion | resignation |
| Coded at | 2026-04-26T19:39:26.816318 |
Raw LLM Response
[
{"id":"ytc_UgwpTLFZ9mJDZ4g8b_R4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugz8C3bWcgkEbIilboZ4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"mixed"},
{"id":"ytc_UgxBQg_hO1QV4ypU06x4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugx84ldVWwF2XLmnaqB4AaABAg","responsibility":"ai_itself","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_Ugyp4fKuJXdbA0jfAXB4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"ban","emotion":"fear"}
]
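A response like the one above has to be parsed and checked before the codes land in the table. Below is a minimal sketch of such a validation step, assuming the batch arrives as a JSON array of records with the four dimensions shown on this page; the `ALLOWED` sets are inferred only from the values visible here, and the real codebook may include more categories.

```python
import json

# Allowed values per dimension (ASSUMPTION: inferred from the codes seen on
# this page; the actual codebook may define additional categories).
ALLOWED = {
    "responsibility": {"none", "distributed", "developer", "user", "ai_itself"},
    "reasoning": {"mixed", "consequentialist", "deontological", "virtue"},
    "policy": {"none", "regulate", "liability", "ban"},
    "emotion": {"resignation", "outrage", "mixed", "approval", "fear"},
}

def validate_batch(raw: str) -> list[dict]:
    """Parse a raw LLM response and keep only well-formed records."""
    records = json.loads(raw)
    valid = []
    for rec in records:
        # Drop anything that is not a dict or lacks a comment ID.
        if not isinstance(rec, dict) or "id" not in rec:
            continue
        # Keep the record only if every dimension holds an allowed value.
        if all(rec.get(dim) in vals for dim, vals in ALLOWED.items()):
            valid.append(rec)
    return valid

raw = ('[{"id":"ytc_X","responsibility":"none","reasoning":"mixed",'
       '"policy":"none","emotion":"resignation"}]')
print(validate_batch(raw))  # the single well-formed record is kept
```

Filtering rather than raising keeps one malformed record from discarding the whole batch; rejected IDs could be queued for a re-coding pass.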