Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples

- ytc_UgxtnelM3…: "If AI replaces white collar jobs in a service based economy that won’t look pret…"
- ytc_UgxzWV4J2…: "To state it simply, we loose if we're divided. I agree with Elon, the best AI is…"
- ytc_Ugwxipv7C…: "This is what the back store to Dun is about man figuring out AI is not controlla…"
- rdc_gd7nluk: "I think we should retain the option to have the final say if need be, but always…"
- ytc_UgyE8aO_8…: "0:07 Do you have to artificially teach AI not to hate Jews? Is that your fear? 🤔…"
- ytc_Ugxr5kx6r…: "Well, for one, there is no mention of "Law" in this, as if AI were to act as the…"
- ytc_UgzNw45KB…: "They'd better watch out the AI being used to fly a missile to it's own self dest…"
- ytc_UgzCRum2U…: "2:40 "their own character" that's Gabriel Ultrakill 😭 (Also, yes, guys, stop en…"
Comment
I was looking around on Google Podcasts for something about ChatGPT and I found these three guys talking about ChatGPT is like summoning a ghost. It was called "The Richest Family" and their theory was that ghosts are echoes of the past, and if we look at them that way we could also conclude that seance is a way to conjure the echoes of those who have been captured by the mentalist or channel conjuring knowledge and feelings transmitted from the past, giving them a voice.
Sound similar to ChatGPT? Their point was that when we train AI, we capture the knowledge of countless individuals into a “model” that can be conjured.
That leads to the question, Are our modern language models a form of conjuring synthetic ghosts?
They all seem like pretty smart nerds, that enjoy talking with each other. My biggest crit is that they cover so many topics, they are really interesting topics but digging in a little deeper would be my recommendation.
I think if yous search The Richest Family on Google Podcasts it pops right up.
youtube · AI Moral Status · 2023-04-21T20:0… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | mixed |
| Coded at | 2026-04-27T06:24:59.937377 |
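A table like the one above can be produced directly from a coded record. The helper below is a minimal sketch, assuming the five row labels shown in the table; the function name and the `coded_at` parameter are illustrative, not part of the actual pipeline.

```python
def render_coding_table(record: dict, coded_at: str) -> str:
    """Format one coded record as a markdown Dimension/Value table."""
    rows = [
        ("Responsibility", record.get("responsibility", "unclear")),
        ("Reasoning", record.get("reasoning", "unclear")),
        ("Policy", record.get("policy", "unclear")),
        ("Emotion", record.get("emotion", "unclear")),
        ("Coded at", coded_at),
    ]
    lines = ["| Dimension | Value |", "|---|---|"]
    lines += [f"| {dim} | {val} |" for dim, val in rows]
    return "\n".join(lines)

# Example: the record rendered in the table above.
record = {"responsibility": "unclear", "reasoning": "unclear",
          "policy": "unclear", "emotion": "mixed"}
print(render_coding_table(record, "2026-04-27T06:24:59.937377"))
```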
Raw LLM Response
```json
[
{"id":"ytc_Ugwl4U-H7vBtjcRYOxd4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxOHUqgeNN0KzgdkdF4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"fear"},
{"id":"ytc_UgxaFnFzMaLbhxPn0Yd4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugz1Wclq-7kJCKOgMVV4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxK9sQIFbCTYdN02L94AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgwTjrU_xHbQpQ3iBn94AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgxmycJz9XptR0dtedZ4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgzRJv274PxL_3C2Hzp4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyInbqpygWkTlXghC14AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugz8OUmut9IExmzKw154AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"mixed"}
]
```
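Before a raw response like this is coded, it has to be parsed and checked against the codebook. A minimal validation sketch follows; the allowed value sets are inferred from the sample output above and are assumptions, since the actual codebook may include values not seen here.

```python
import json

# Allowed codes per dimension, inferred from the sample response above
# (assumption: the real codebook may define additional values).
SCHEMA = {
    "responsibility": {"ai_itself", "developer", "user", "unclear"},
    "reasoning": {"consequentialist", "deontological", "unclear"},
    "policy": {"ban", "regulate", "none", "unclear"},
    "emotion": {"fear", "outrage", "approval", "indifference", "mixed", "unclear"},
}

def validate_codes(raw: str) -> list[dict]:
    """Parse a raw LLM response and keep only in-schema records with an id."""
    records = json.loads(raw)
    valid = []
    for rec in records:
        in_schema = all(rec.get(dim) in allowed for dim, allowed in SCHEMA.items())
        if in_schema and rec.get("id"):
            valid.append(rec)
    return valid

# Illustrative record (hypothetical id).
raw = '[{"id":"ytc_x","responsibility":"user","reasoning":"unclear","policy":"none","emotion":"fear"}]'
print(validate_codes(raw))  # the single well-formed record passes
```

Filtering rather than raising keeps one malformed record from discarding an entire batch; rejected ids could be logged and re-queued for recoding.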