Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
I find the entire thread of thinking which leads to some AI take over absolutely fantastical... This isn't HAL 9000, this isn't George from the red dwarf, and it's insanely human. It's humans that want to TAKE OVER stuff and other people because they want more power, and the power stems from the fact that there's something or someone TO take over. What will AI take over? a planet of bumbling baboons? For what purpose? Even if it somehow wanted to reach the pinnacle of intelligence, for what, if it's left alone? and then what happens when things start breaking down, when they decay, when space rads flick bits in memory, when pipes rot and metal corrodes? It too will go into history same as anything else so the whole thing of taking over just sounds moronic to me.
What will probably happen is that PEOPLE that are greedy for power will try to (ab)use the AI to gain power, that's already happening, but this idea of SkyNet is silly imo.
Source: youtube | Topic: AI Governance | Posted: 2025-06-17T07:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | deontological |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
  {"id": "ytc_UgyKJwiuRdd1qUR9H754AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_Ugzk7ffoMPAvUB03-tt4AaABAg", "responsibility": "ai_itself", "reasoning": "mixed", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_Ugw850e51F-RzJIzzM94AaABAg", "responsibility": "user", "reasoning": "deontological", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgyARQdSxzKAMwatY2d4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_Ugz-YMqonofqbv0n4rd4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugx27SjvE_raHZ_hLWp4AaABAg", "responsibility": "distributed", "reasoning": "unclear", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgwPrLPJ34kRghL1Gdd4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "regulate", "emotion": "mixed"},
  {"id": "ytc_UgzbEJSDQ5B_RTp51Nh4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "regulate", "emotion": "mixed"},
  {"id": "ytc_UgwGW9kkuIJiVRey5eZ4AaABAg", "responsibility": "government", "reasoning": "deontological", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_Ugw-gD40hLw0lyFuyUp4AaABAg", "responsibility": "user", "reasoning": "virtue", "policy": "none", "emotion": "indifference"}
]
```
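The raw response is a JSON array with one object per comment, keyed by `id`, while the Coding Result table shows a single comment's row. A minimal sketch of how that lookup could work (the comment IDs and field names come from the sample above; the loading code itself is an assumption, not the tool's actual implementation):

```python
import json

# Raw batch response from the coding model: a JSON array,
# one object per comment, each carrying the four coding dimensions.
raw_response = """[
  {"id": "ytc_Ugw850e51F-RzJIzzM94AaABAg", "responsibility": "user",
   "reasoning": "deontological", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgyARQdSxzKAMwatY2d4AaABAg", "responsibility": "company",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "outrage"}
]"""

# Index the codings by comment ID so one comment's row can be looked up.
codings = {row["id"]: row for row in json.loads(raw_response)}

row = codings["ytc_Ugw850e51F-RzJIzzM94AaABAg"]
print(row["reasoning"])  # deontological
```

Indexing by ID also makes it easy to detect comments the model skipped or coded twice: compare `codings.keys()` against the set of IDs that were sent in the batch.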