Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- ytc_Ugw1yZ_kl… · Interactive quizzes sound fun! Olovka helps me turn notes into quizzes too, keep…
- ytc_UgzZMYntn… · It's unfair if human beings could kill everything except themselves. AI is the b…
- ytc_UgzHeuZZD… · NO, AI does not wipe-out the working class. There's so much to be done that AI c…
- ytc_UgyEKyuia… · Something i've noticed: AI has a huge gap in knowing any short stories. Bit of …
- ytr_UgyhxFyyY… · We appreciate your concern! While popular culture often portrays AI in a negativ…
- ytc_Ugipm9QoH… · why would you give a robot the feeling of pain pretty stupid pain is a weekness…
- ytc_UgyWaRYYo… · Cutting edge facial recognition software?? Couldn't it even tell that his ears w…
- ytr_Ugx-agJCh… · @TheRealDuckofDeath "Like, I kind of suspect the latest Marvel cinema teasers we…
Comment
Good thing mentioning the alignment problem with Artificial General Intelligence. I see a lot of people brushing that off because current AI systems don't pose existential threats, but we have little evidence that we can't build one that does within our lifetimes. Also, there can't be enough emphasis on how the default for these systems is misalignment; unless it is _specifically programmed to care about humans and share our values,_ an AGI won't mind if going about its goals means very bad things for humans.
Also, the alignment problem isn't just a potential future problem, it happens in all current AI systems. For ChatGPT, it just means being a pathological liar (cuz making up some bullshit that sounds plausible is frequently easier than knowing the truth), but even if an AGI that wields total power over humanity is impossible, progress in the alignment problem has benefits.
youtube · AI Moral Status · 2023-08-26T19:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | unclear |
| Coded at | 2026-04-26T23:09:12.988011 |
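The coding result above follows a fixed per-comment schema. Here is a minimal sketch of that record in Python, with field names taken from the JSON keys in the raw response below and example values drawn from the displayed data; the `CodingResult` class name itself is illustrative, not part of the tool:

```python
from dataclasses import dataclass

# Illustrative schema for one coded comment, inferred from the table
# above and the raw LLM response below. Field names match the JSON
# keys; the class name is hypothetical.
@dataclass
class CodingResult:
    id: str              # comment ID, e.g. "ytc_UgwT--5kh-XoylFh2054AaABAg"
    responsibility: str  # e.g. "none", "developer", "ai_itself", "unclear"
    reasoning: str       # e.g. "consequentialist", "unclear"
    policy: str          # e.g. "regulate", "none", "unclear"
    emotion: str         # e.g. "fear", "approval", "indifference",
                         #      "mixed", "resignation", "unclear"
```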
Raw LLM Response
[{"id":"ytc_UgwT--5kh-XoylFh2054AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgzZiyhGZ8dEIvx6MLx4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgxK3L4jh7XxilDJs8F4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgxtYR1gj-Q37kdLr1N4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgxlpOsqxdIdO9Lgy_x4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"resignation"},
{"id":"ytc_UgwN2_IgGvgwOeWZ4R54AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgwVKSYPSj0LxQKm3O54AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgxYm9BTWuiG1vtb-BR4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgzelfsvOoRcE3UTYqN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"resignation"},
{"id":"ytc_UgxIA4vv04TP2oYa8Y14AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"})