# Raw LLM Responses

Inspect the exact model output for any coded comment, either by looking up a comment ID or by choosing one of the random samples below.

## Random samples
- `ytc_UgzCXGY_W…` — "Same, I feel soooo disapointed when I find out that a good painting is actually …"
- `ytr_UgxKQE_b_…` — "@heavyweaponsguy6284 your argument does not hold up the reason people become me…"
- `ytc_Ugx6YmYgP…` — "After I first became aware of AI, I did some reading about it. I feel it is the…"
- `ytc_UgzlbK2Wi…` — "AI would need to be taught / develop empathy and love. Without that, there is n…"
- `ytc_Ugyiqf0um…` — "The first 2 images really reveal how lifeless ai art is compared to human art…"
- `ytc_UggGeObDK…` — "Automated trucks would probably be best utilized on highways. Once they enter de…"
- `ytc_UgxbEBKKp…` — "I wouldn't worry about people who hate AI. A bee doesn't waste time explaining…"
- `ytc_UgzT26qSB…` — "unfortunately it's not that much smarter than the latest claude and grok modelr …"
## Comment
There are some great books on AI out there, and one that I read recently is "The Coming Wave" by the founder of Anthropic. He provides some examples of how a super-intelligent AI could "get out" and absolutely wreak havoc on humanity. It's scary because of how possible it all is, and how close we're getting to AGI, and eventually super-intelligent AGI. We will be ants compared to it, and there's no way to know what kinds of ways it will trick AI researchers regardless of what they do. Suleiman gives some examples of how a super-intelligent AI could start making those who first create it, INSANE sums of money..
He uses the Amazon M-Turk system and how it would start doing basic, low-paying tasks, with thousands of accounts, but completing them so quickly that it would amass a fortune in weeks. Then how it could use that money to expand into other investments, and those people could reach the point where they control the world, including governments and the economy. Check that book out if this is something that interests you.
Source: youtube · 2024-06-29T03:0…
## Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | fear |
| Coded at | 2026-04-27T06:26:44.938723 |
## Raw LLM Response
```json
[
  {"id":"ytc_UgzVMpoQTwl77oyyzK94AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgyVTzGqDVa6Gocdp_N4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_UgwTmXflsrZvOqsydQd4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgxqBv7kY4-LnkdKFu94AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugz2XUIFHC_UVxXR1GZ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugz8ctBEM7ir0D9WzlV4AaABAg","responsibility":"user","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_UgxXtSwI8t76z5xC7jJ4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"mixed"},
  {"id":"ytc_Ugz7u0pBS5mp3_3BOZJ4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_Ugx8HzK8h1vc-HEe2Ul4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgzAJQUv7UPQjENtRep4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"}
]
```
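Each raw response is a JSON array of records, one per comment, with the four coding dimensions shown in the table above. A minimal validation sketch is given below; note that the allowed-value sets are assumptions inferred only from the values visible in this batch (the project's actual codebook may define more), and the sample `raw` string reuses the first record above.

```python
import json

# Coding dimensions and the values observed in the raw responses above.
# ASSUMPTION: these sets are inferred from this batch only; the project's
# codebook may allow additional values.
ALLOWED = {
    "responsibility": {"ai_itself", "user", "developer", "distributed", "none"},
    "reasoning": {"consequentialist", "deontological"},
    "policy": {"unclear", "none", "regulate", "liability"},
    "emotion": {"fear", "outrage", "approval", "mixed"},
}

def validate_coding(raw: str) -> list:
    """Parse a raw LLM response and check every record against the schema."""
    records = json.loads(raw)
    for rec in records:
        # IDs in this dataset start with ytc_ (comments) or ytr_ (replies).
        if not rec.get("id", "").startswith(("ytc_", "ytr_")):
            raise ValueError("unexpected comment id: %r" % rec.get("id"))
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                raise ValueError("%s: bad %s value %r" % (rec["id"], dim, rec.get(dim)))
    return records

raw = ('[{"id":"ytc_UgzVMpoQTwl77oyyzK94AaABAg","responsibility":"ai_itself",'
       '"reasoning":"consequentialist","policy":"unclear","emotion":"fear"}]')
coded = validate_coding(raw)
print(coded[0]["emotion"])  # → fear
```

A check like this catches the usual failure mode of structured LLM output: a record that parses as JSON but uses a label outside the codebook, which would silently corrupt downstream counts.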