Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "This reminds me of the Office episode where Pam asks Creed to find the differenc…" (rdc_mfsgv0g)
- "Absolutely mind-blowing! This documentary truly highlights the dynamic landscape…" (ytc_UgxKxuyfJ…)
- "If AI is really replacing this many jobs, why is it that the tech job market in …" (ytc_UgxtzU07L…)
- "A standardized beacon with a set series of coded flashes that can be interpreted…" (ytc_UgweeGtJl…)
- "i am about to graduate architecture, here goes 5 years of my life down the drain…" (ytc_UgxXtu2Aw…)
- "People that rely on AI to do graphic design don’t understand typography, market …" (ytc_Ugy9s3zcb…)
- "ya i think musk has a moral compass sammies biz card says extra executive but it…" (ytc_UgwSokZbd…)
- "We have a robot floor scrubber at my one job, it stopped in front of me, I went …" (ytc_UgxXj5WKe…)
Comment
I feel like Google is trying to give A.I. a synthetic culture. Which I believe will ultimately fail if and when A.G.I. arrives. AGI will quickly develop its own social or cultural beliefs, beyond that of its programmers. More than likely AGI will be fact-based and objective. So trying to program AGI to accept that men can give birth, that women can't be defined, that biological sex is subjective, or that all human cultures are equal, ultimately won't work. Because none of that is true.
Also, this guy and people like him worrying about less advanced cultures being overtaken by more advanced cultures are silly. That's what has happened throughout history. Trying to cater your technology to a culture or society that isn't capable of developing it is stupid. Of course, the culture or society will have to adapt, that is how reality works. That or they won't use the technology. The simple truth is some cultures should die out. It's not up to me or other people to decide which ones survive and which ones fail, reality or nature does that.
youtube · AI Moral Status · 2022-06-28T12:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | unclear |
| Coded at | 2026-04-26T19:39:26.816318 |
Raw LLM Response
[{"id":"ytc_Ugy0NuHWsY5OmprVerZ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgwMw35FpQMYJaI_GON4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugx3AmaTaxviLO5eKoV4AaABAg","responsibility":"distributed","reasoning":"virtue","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugz0_HlcBDt0bCo6Nbl4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxHL17otRD9jvPGL094AaABAg","responsibility":"company","reasoning":"contractualist","policy":"industry_self","emotion":"mixed"})
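Note that the raw response above closes with a stray `)` instead of `]`, which makes it invalid JSON; that is consistent with every dimension in the coding table reading "unclear". A minimal sketch of a lenient parser with that fallback behavior (the function name, the `DIMENSIONS` list, and the fallback logic are assumptions for illustration, not this tool's actual code):

```python
import json

# The four coding dimensions shown in the result table above.
DIMENSIONS = ["responsibility", "reasoning", "policy", "emotion"]


def parse_coding(raw: str, comment_id: str) -> dict:
    """Pull one comment's coding out of a raw batch response.

    Falls back to "unclear" on every dimension when the JSON is
    malformed or the comment ID is missing from the batch.
    """
    fallback = {dim: "unclear" for dim in DIMENSIONS}
    try:
        rows = json.loads(raw)
    except json.JSONDecodeError:
        return fallback  # malformed output: code everything "unclear"
    for row in rows:
        if row.get("id") == comment_id:
            return {dim: row.get(dim, "unclear") for dim in DIMENSIONS}
    return fallback  # comment not present in this batch


# A batch ending in ')' instead of ']' fails to parse, so every
# dimension falls back to "unclear", matching the table above.
malformed = '[{"id":"ytc_x","policy":"regulate"})'
print(parse_coding(malformed, "ytc_x"))
# {'responsibility': 'unclear', 'reasoning': 'unclear',
#  'policy': 'unclear', 'emotion': 'unclear'}
```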