Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
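Programmatically, the same lookup amounts to a small index from comment ID to the stored record. A minimal sketch, assuming the codings are saved as a JSON array of per-comment objects with an `id` field (as in the raw response at the bottom of this page); the file name `raw_llm_responses.json` is hypothetical:

```python
import json

# Minimal sketch of an ID -> record index over stored raw LLM codings.
# Assumes the codings are saved as a JSON array of per-comment objects
# with an "id" field, as in the raw response shown further down;
# the file name "raw_llm_responses.json" is hypothetical.
def load_index(path: str = "raw_llm_responses.json") -> dict[str, dict]:
    with open(path, encoding="utf-8") as f:
        records = json.load(f)
    return {rec["id"]: rec for rec in records}

index = load_index()
print(index.get("ytc_UgwqEcV4Qs5OkZ4AFgN4AaABAg"))
```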
Random samples — click to inspect
- The working class will put THEMSELVES out of a job by developing Artificial Inte… (ytc_UgzhpGWB0…)
- “AI is destroying jobs” gets a lot more clicks than “the labor market is adaptin… (ytc_UgyOwtOC0…)
- Some people just take issue with working directly on military applications. But … (rdc_dwvi0x3)
- A.I. is consuming all our water and electricity and we get to pay for it. The E… (ytc_UgzCbaXRp…)
- A robot cooking? I NEED to see THAT video. "Talk is cheap" especially from video… (ytc_UgwTeqInW…)
- These words used to be a part of my vernacular BEFORE ai. Stopped using them bec… (ytc_UgzEfHaB9…)
- Watching these a.I robot videos these things kno how to fight an what ive seen i… (ytc_Ugz3nKcJw…)
- Thanks for the love! ❤️ Sophia really brings a unique perspective on wisdom and … (ytr_UgzfoAXVu…)
Comment
The thing about superintelligence is, by definition it needs to be smarter than any human. If we train it on human-generated data, or synthetic data that is produced by algorithms designed by humans, how can we possibly train it to be smarter than us?
There seems to be this assumption that if we make a "general" intelligence, aka an AI model as smart as any human and just as capable, then it will naturally follow that the general AIs will be able to figure out how to make a super AI.
But...... why do AI companies and researchers have that assumption? If we humans can't figure it out, there's no reason to believe something no more intelligent than we are will be able to.
And if they do figure out how to make a Super AI, at best it will be controlled by the General AIs, at worst it will not be controlled at all, but either way, it won't be controlled by humans. And we think it's a good idea to be striving towards this goal because.......?
Source: youtube · AI Moral Status · 2025-10-31T14:5… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | unclear |
| Coded at | 2026-04-26T23:09:12.988011 |
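In code, a result like this could be carried as a small typed record. A minimal sketch; the class name is hypothetical, and the comment ID is assumed to be that of the second record in the raw response below, whose values match this table:

```python
from dataclasses import dataclass
from datetime import datetime

# Illustrative container for one coded comment; the field names mirror
# the dimensions in the table above, the class name is hypothetical.
@dataclass
class CodingResult:
    comment_id: str
    responsibility: str  # e.g. "developer", "company", "none", "unclear"
    reasoning: str       # e.g. "consequentialist", "deontological", "unclear"
    policy: str          # e.g. "regulate", "liability", "none", "unclear"
    emotion: str         # e.g. "fear", "approval", "unclear"
    coded_at: datetime

result = CodingResult(
    comment_id="ytc_UgwqEcV4Qs5OkZ4AFgN4AaABAg",  # assumed ID of the comment shown above
    responsibility="unclear",
    reasoning="consequentialist",
    policy="unclear",
    emotion="unclear",
    coded_at=datetime.fromisoformat("2026-04-26T23:09:12.988011"),
)
```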
Raw LLM Response
[
{"id":"ytc_UgzoYZLwz1hvNcmWdih4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgwqEcV4Qs5OkZ4AFgN4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"unclear"},
{"id":"ytc_UgzqRekSJOzVfIBImfh4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxzeFkkpaR4Jdj5J5J4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgxMQgb3wFL9aJnLrj54AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_Ugy9NqqZ5u5z9bOVc754AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
{"id":"ytc_Ugw4lYL_D-jVZDsPA9B4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwhN7AlDS6bIJ4PAGh4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgydiU7eVhVJv35V0xF4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_UgweoqkAkh4nIO_Iwwl4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"}
]
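Because the model answers with one JSON array per batch, a downstream check can parse that array and flag out-of-vocabulary codes before anything is written to the coding-result table. A rough sketch; the allowed value sets are assembled from this one response, not the project's actual codebook:

```python
import json

# Hypothetical validator for one batch of raw LLM output. The allowed
# value sets below are illustrative, taken from the response above
# rather than the full codebook.
ALLOWED = {
    "responsibility": {"developer", "company", "distributed", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "unclear"},
    "policy": {"regulate", "liability", "none", "unclear"},
    "emotion": {"fear", "outrage", "approval", "indifference", "resignation", "unclear"},
}

def validate_batch(raw: str) -> list[str]:
    """Return a list of problems found in one raw batch response."""
    try:
        records = json.loads(raw)
    except json.JSONDecodeError as exc:
        return [f"unparseable response: {exc}"]
    problems = []
    for rec in records:
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                problems.append(f"{rec.get('id')}: unexpected {dim} value {rec.get(dim)!r}")
    return problems
```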