Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
The 99% to 1% analogy doesn’t make sense. If there’s a 1% chance of dying in a car which there basically is and everyone still gets in a car lol. Nothing is risk free. I think the biggest issue here is nobody can quantify what super intelligence even is bc the definition of it is beyond human intelligence. Ergo how is anyone supposed to create a paper defending themselves and or agi against a theoretical outcome that is beyond our comprehension. I get on his terms we’re talking about control of ai and how one would be able to control that but i think what isn’t articulated or understood enough is 1. How this thing works bc even the scientists don’t understand. I don’t think there’s been an invention in history that nobody can explain or understand how it works. That alone should pause these projects dead in their tracks until someone can at least articulate that portion of this which would curb super intelligence in general until there’s a better understanding of large language models and agi agents.
youtube · AI Governance · 2025-09-06T21:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_UgxIlBvJqBMWtJCXAZx4AaABAg","responsibility":"government","reasoning":"deontological","policy":"none","emotion":"fear"},
{"id":"ytc_UgxA4vG-8qFHXlE0Kyd4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"mixed"},
{"id":"ytc_UgzoZQXlVp7yvlJM1yt4AaABAg","responsibility":"government","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_Ugy0C40KlW32km93OD54AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugy9zz4yehjdNC4sl3l4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugz5SvSiTgZzhzeht0p4AaABAg","responsibility":"distributed","reasoning":"deontological","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgxHzmczQ8WOZwPBdOp4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"fear"},
{"id":"ytc_UgynSzu8WYrwjo8LqiV4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgxZvm09EBwjLWK2bwh4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugy5GmHB8PDEW-7BRL54AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"}
]
```
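Because the raw response is a plain JSON array of per-comment codings, a small validator can check each row against the coding dimensions before it is stored. A minimal sketch: the allowed-value sets below are inferred only from the table and sample output above (the real schema may contain values not seen here), and `validate_codings` is a hypothetical helper, not part of the tool itself.

```python
import json

# Allowed values per coding dimension, inferred from the "Coding Result"
# table and the sample response above (assumption: the schema may be wider).
ALLOWED = {
    "responsibility": {"none", "government", "company", "ai_itself", "distributed"},
    "reasoning": {"none", "consequentialist", "deontological", "virtue", "mixed"},
    "policy": {"none", "liability", "ban", "regulate", "unclear"},
    "emotion": {"none", "fear", "mixed", "outrage", "indifference", "approval"},
}

def validate_codings(raw: str) -> list[dict]:
    """Parse a raw model response, keeping only rows whose ID looks like a
    YouTube comment ID and whose dimension values are all known."""
    rows = json.loads(raw)
    valid = []
    for row in rows:
        if not row.get("id", "").startswith("ytc_"):
            continue
        if all(row.get(dim) in values for dim, values in ALLOWED.items()):
            valid.append(row)
    return valid

raw = ('[{"id":"ytc_UgxIlBvJqBMWtJCXAZx4AaABAg","responsibility":"government",'
       '"reasoning":"deontological","policy":"none","emotion":"fear"}]')
print(len(validate_codings(raw)))  # 1 row passes
```

Rows that fail validation would typically be queued for a re-coding pass rather than silently dropped, since a single malformed row should not discard the whole batch.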