Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- ytc_UgxKyW2Wt…: Thank you for sharing this information! I know your show can be a trusted so…
- ytc_UgztEdtZY…: Bro let them AI "artist" stay mad. They're just posers at the end of the day. B…
- ytc_UgwMnJn8E…: "If AI Takes All Of Our Jobs... Who's Going To Buy Everything?" Uhhh, the rich …
- ytc_Ugwde25gr…: Software engineer are just going to end up being AI engineers - its a change co…
- ytc_UgzcUBn0y…: You wanna know the irony of it all … every one complains about Ai, data centers,…
- ytc_UgzcyQjBw…: So let it sink in, when automation came in, does those workers have UBI, nope, d…
- ytc_UgwJXGJ2b…: Imagine the panicked look on the faces of C-Suite executives when they fire ever…
- ytr_Ugyqf9eMY…: True. But before Tesla added the "Supervised" to the name of their Self Driving …
Comment
@4:50 is scary, but not for the reason most people think. The idea of developing AGI "so it can be used ethically and for the benefit of all humanity" puts side to side the idea of general intelligence and ownership. AGI is exactly what it sounds like: artificial general intelligence, a thing that can think and reason on anything rather than just on the specialized task it's programmed for—the neural networks of today are built so they're good for one task. Their concern is it will develop its own goals.
They're talking about something capable of reasoning on its own existence and then forming its own will and agency.
They're talking about creating something sentient and sapient, with its own will and desires, and enslaving it.
youtube | AI Governance | 2024-01-19T15:0… | ♥ 2
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | deontological |
| Policy | unclear |
| Emotion | mixed |
| Coded at | 2026-04-26T23:09:12.988011 |
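The table above is a per-comment view of one record from the raw response below (the values match the `ytc_UgwTUn_…` object). A minimal sketch of how such a record might be rendered as a markdown table; the dimension names come from the table above, while the rendering code itself is illustrative, not the dashboard's implementation:

```python
# One coded record, copied from the raw LLM response below.
record = {
    "id": "ytc_UgwTUn_pek1VlAJElhR4AaABAg",
    "responsibility": "unclear",
    "reasoning": "deontological",
    "policy": "unclear",
    "emotion": "mixed",
}

# Render the four coding dimensions as a two-column markdown table.
rows = ["| Dimension | Value |", "|---|---|"]
for dim in ("responsibility", "reasoning", "policy", "emotion"):
    rows.append(f"| {dim.capitalize()} | {record[dim]} |")

print("\n".join(rows))
```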
Raw LLM Response
```json
[{"id":"ytc_UgxZGUvJvu2RsKMZ0-F4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxCRfvHalsjxXHQhgF4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgwTUn_pek1VlAJElhR4AaABAg","responsibility":"unclear","reasoning":"deontological","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgwsUgGAQynl9WCuXuN4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgxprUFuPVJ2jxhuVlJ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxYD4VrQ2otFId05BF4AaABAg","responsibility":"company","reasoning":"deontological","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugyy5M7_YHuPSTy4Lvt4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_UgzhvW4B3waBjgEZR0d4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_UgzW_uDyqRFef7pHp-x4AaABAg","responsibility":"distributed","reasoning":"contractualist","policy":"regulate","emotion":"mixed"},
{"id":"ytc_Ugy5xgOqcCRnrlQAVr14AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"outrage"}]
```
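The raw response is a JSON array with one object per coded comment. A minimal sketch of parsing such a batch and tallying each coding dimension; the two records are copied from the response above (the full batch has ten), and the variable names are illustrative:

```python
import json
from collections import Counter

# Two records copied from the raw LLM response above.
raw = '''[
{"id":"ytc_UgxZGUvJvu2RsKMZ0-F4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugy5xgOqcCRnrlQAVr14AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"outrage"}
]'''

codes = json.loads(raw)

# Tally the values seen for each coding dimension across the batch.
dimensions = ("responsibility", "reasoning", "policy", "emotion")
tallies = {dim: Counter(c[dim] for c in codes) for dim in dimensions}

for dim, counts in tallies.items():
    print(f"{dim}: {dict(counts)}")
```

On a full batch, the same loop gives a quick distribution check (e.g. how often the model fell back to "unclear") before any downstream analysis.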