# Raw LLM Responses
Inspect the exact model output for any coded comment.
## Comment
Elon is brilliant but also I don’t hold him in high regard. The guy know AI is bad news, the Hindus have plenty of evidence for this and yet he keeps pushing AI tech, and at the same time pushes brain chips so that AI doesn’t control us. It’s like a scientist creating a chemical weapon and selling the antidote knowing that you and society are about to be sprayed with said chemical weapon.
Plus he gets a lot of his tech from darpa who got the diagrams from Hindu temples, so I’m at odds with this guy.
youtube · AI Governance · 2022-05-16T16:2…
## Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | virtue |
| Policy | regulate |
| Emotion | outrage |
| Coded at | 2026-04-27T06:24:59.937377 |
## Raw LLM Response
```json
[
  {"id":"ytc_UgzwsLt5FBM7xjHqWDV4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgwHlOzRqR4xoJbPW7t4AaABAg","responsibility":"company","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgyxhCd9pFmuMKkuh-h4AaABAg","responsibility":"ai_itself","reasoning":"virtue","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgwKaU0hu1bzHcZpyWt4AaABAg","responsibility":"creator","reasoning":"mixed","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgxUc5gbbkLPnG3sH8h4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugwo0yD7LeHSY9x6aBV4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgwFNVYWTecsyFVDn8Z4AaABAg","responsibility":"developer","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgyDqJVhh6nH3tTr-mR4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_Ugy5Bj8_KqcZ593Ja8p4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"fear"},
  {"id":"ytc_Ugy0AYRP43mKfPrs_q54AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"outrage"}
]
```
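As a sketch of how a raw response like the one above can be matched back to a coded comment: the model returns a JSON array of per-comment codings, so parsing it and indexing by the `id` field gives a direct lookup. The field names below are taken from the JSON itself; the two sample entries are copied from the response above, and the variable names are illustrative, not part of the tool.

```python
import json

# Raw LLM response: a JSON array of per-comment codings
# (same shape as the response shown above; two entries for brevity).
raw_response = '''
[
  {"id": "ytc_Ugwo0yD7LeHSY9x6aBV4AaABAg", "responsibility": "developer",
   "reasoning": "virtue", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_UgzwsLt5FBM7xjHqWDV4AaABAg", "responsibility": "developer",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"}
]
'''

# Index the codings by comment ID for constant-time lookup.
codings = {row["id"]: row for row in json.loads(raw_response)}

# Looking up one ID recovers the four coded dimensions for that comment.
coding = codings["ytc_Ugwo0yD7LeHSY9x6aBV4AaABAg"]
print(coding["responsibility"], coding["policy"])  # developer regulate
```

The first entry here corresponds to the "Coding Result" table above (developer / virtue / regulate / outrage), which is how a per-comment table can be reconstructed from the raw batch response.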