Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
I find it fascinating how many people are ready to claim absolute certainty about lack of sentience or consciousness of a black box system that's complex far beyond our ability to understand how it works, while it displays general intelligence and ability to reason. Why is it so difficult to admit that we don't really have a way of knowing, and that all such claims are merely personal opinions? This is something incredibly important that we need to figure out, not something we should sweep under the rug under the pretense that we have already figured it out, based on our own anthropocentric wishful thinking. Humanity has burned itself so many times in history, whenever it wanted to cling to being unique and special. It's called Copernican principle, and one would think we'd learn by now to be a little bit more careful than to claim earth is in the center of the universe, or that there's nothing that it's like to be an LLM agent, because that's how it feels to us, rather than because we have evidence to support the claim.
Platform: youtube · Video: AI Moral Status · Posted: 2023-08-21T11:3… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | mixed |
| Policy | unclear |
| Emotion | indifference |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_UgwpE1d3QG4Dz2_jXqV4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugz2GJIl2Ll3algzTsh4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgyYPFbhCEujjFs2IiF4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgyyrYfnXBkDQV7DlpF4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgwhW_XRraJd36k1Nj94AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugx0MyslqZ0AsU-S2tR4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgxOwpEiNZ-UHN3fEpV4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgzC5pNScnLT4U-tLWh4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Ugy13RjOhTaNdqkBVA14AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"approval"},
  {"id":"ytc_Ugx6k-US86ZiDWyn8-J4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"unclear","emotion":"indifference"}
]
```
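Since the model returns one JSON object per comment with the four coding dimensions shown in the table above, a batch of responses can be parsed and tallied directly. A minimal sketch, assuming only that the payload is a JSON array with the field names seen in the raw response (the two example records here are abbreviated stand-ins, not real comment IDs):

```python
import json
from collections import Counter

# Hypothetical abbreviated payload in the same shape as the raw LLM response above.
raw = """[
  {"id":"ytc_example1","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"outrage"},
  {"id":"ytc_example2","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"indifference"}
]"""

codings = json.loads(raw)

# Tally each coding dimension across the batch.
tallies = {
    dim: Counter(c[dim] for c in codings)
    for dim in ("responsibility", "reasoning", "policy", "emotion")
}

for dim, counts in tallies.items():
    print(dim, dict(counts))
```

In practice the raw string would come from the model API response, and a lookup table keyed by `id` would join each coding back to its source comment.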