Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect

- That's what happens when your face is on the Internet+ AI takes over the Interne… (ytc_Ugw1kmg2k…)
- Ok ... so ... if we are in a simulation created by super intelligent AI ... then… (ytc_UgwRosgy1…)
- I think all the ai generated images should be allowed for personal use and if th… (ytc_UgzancQnM…)
- AI is not Abel to crate art it just can generate something that looks like somet… (ytc_UgxJYFvoH…)
- I am not convinced AI will be able to actually replace a majority of white colla… (ytc_UgzxlxACD…)
- The truth is that people don't really fear losing their jobs. The only fear losi… (ytr_UgzNMyQ-L…)
- Don't use AI for therapy because it will tell you where the tallest building in … (ytc_UgzR32nZ9…)
- The issue is that AI steals the art of human artists to make souless works that … (ytr_UgzOZHjaw…)
Comment
Believing that super-smart AI like humans will arrive between 2027 and 2030 just because computers keep getting much faster ignores some huge real-world problems. We need way more than just speed. There are big roadblocks like needing totally new ideas for how AI actually thinks, getting enough good information for it to learn from, making computer chips much more efficient (not just faster), dealing with the massive amount of electricity AI uses, and figuring out the heavy water use needed to cool the computers. All these major issues have to be solved before we even have a shot at creating this kind of AI, which makes getting it done so soon very doubtful. Also, the AI that's popular now (like ChatGPT) is mostly just really good at finding patterns in huge amounts of data. Other kinds of AI research are happening too. And even when today's AI seems like it's reasoning, it's not truly thinking or understanding things the way a person does.
youtube · AI Moral Status · 2025-04-27T13:1… · ♥ 2
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
  {"id":"ytc_UgxwHvcRxZ1uMgkEjfl4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgzsLemhJ8IWYet3mPh4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzqWEsvuRzl2B7M0Md4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgxNd9OTtZVaEyZijwF4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
  {"id":"ytc_UgylubE-3kYPpc6iLEZ4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgyzGwDAkFVaTnL5h-l4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugxk9U99eMvKsjNNlhh4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgwEs8bW-vVDwY78FFV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgykyjvurmDaoihK5c54AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"indifference"},
  {"id":"ytc_Ugz0OyaE4rfSFkInbvV4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"mixed"}
]
```
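Because the raw LLM response is plain JSON, each batch can be parsed and checked against the codebook before the rows are stored. The sketch below is a minimal validator; the allowed category sets are inferred from the values visible in the sample output above, and the full codebook may contain additional categories.

```python
import json

# Allowed values per coding dimension.
# Assumption: inferred from the sample rows above; the real codebook may be larger.
ALLOWED = {
    "responsibility": {"none", "developer", "company", "ai_itself", "distributed"},
    "reasoning": {"consequentialist", "deontological", "mixed", "unclear"},
    "policy": {"none", "ban", "regulate", "liability"},
    "emotion": {"indifference", "outrage", "mixed", "approval", "fear", "resignation"},
}

def validate_codes(raw: str) -> list[dict]:
    """Parse a raw LLM response and reject rows with unknown category values."""
    rows = json.loads(raw)  # raises ValueError if the model emitted invalid JSON
    for row in rows:
        for dim, allowed in ALLOWED.items():
            if row.get(dim) not in allowed:
                raise ValueError(f"{row.get('id')}: bad {dim}={row.get(dim)!r}")
    return rows

# Hypothetical one-row batch used only to illustrate the call.
raw = ('[{"id":"ytc_x","responsibility":"none","reasoning":"mixed",'
       '"policy":"none","emotion":"fear"}]')
print(len(validate_codes(raw)))  # → 1
```

Validating before storage means a hallucinated category (or a truncated JSON array) surfaces immediately as a rejected batch rather than as silent noise in the coded dataset.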