Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- "Majorsoft : Meet our new AI, we tell it how to think, feel, and say,...BUT we st…" (ytc_UgzneVq8t…)
- "This is about making the average person either slaves or turned into fertilizer.…" (ytc_Ugx18NzIF…)
- "It's not the ai as the ai only lead to false arrest of black people. The logic i…" (ytr_UgxEW6ov-…)
- "It is Microsoft just as well but yeah all part of the terrible generations long …" (ytr_Ugxj_yZae…)
- "AI can do anything any human can do at least in the next 10 years. The sad truth…" (ytc_UgyUCR_Ry…)
- "Yeahhhhh that’s why I’m going to community college and not touching AI. I’m luck…" (ytc_UgwWsJ4Aw…)
- "Ai will never be the Problem but people like trump using it as a weapon one day …" (ytc_UgwLLOY_A…)
- "The LLM fail (not AI, AI still doesn’t exists and there’s no technology today th…" (ytc_Ugz-2fdlB…)
Comment
As much as I appreciate Tucker, Elon Musk is one of the most useless, pointless interviewees he could possibly bring on. His knowledge about this topic is very clearly derived entirely from popular science fiction and not based in reality. There are plenty of interviews of actual experts in this field explaining that the danger of this technology lies with who controls the code, not the code itself. Large language models(ChatGPT etc) are little more than a sophisticated pattern-solver. You give it a bunch of stuff and it predicts what the next thing might be. Nobody alive today will still be alive by the time we reach a point where we have actual sci-fi-style AI, *IF* we reach that point.
youtube
AI Governance
2023-04-18T03:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | none |
| Emotion | outrage |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_UgwWvOqVKZ4OJNup9V54AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgwnX6vRI59sJ2oRzLh4AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugzq5Wer5zUOWHfR70B4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgxNMlQ7suF4BAhil5R4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgxncZozV-BiNHc0Q5d4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgypCKe0Btvdi1qlg1l4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_Ugxk7hrdbXxe3X2oRTF4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"mixed"},
{"id":"ytc_UgzphC8V-Id0gpCcwIB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugz4DXbFTRABC4vD3rl4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugx7mhN1bOeKGJqQTut4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"outrage"}
]
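The raw response above is a JSON array of per-comment codes, one object per comment ID, with one value for each of the four dimensions shown in the Coding Result table. A minimal sketch of how such a response could be parsed and validated is below; the allowed category values are inferred only from the codes visible in this sample, not from the full codebook, so treat them as assumptions.

```python
import json

# Allowed values per dimension, inferred from the sample response above.
# The full codebook likely contains additional categories (assumption).
ALLOWED = {
    "responsibility": {"ai_itself", "government", "developer", "none"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"none", "regulate"},
    "emotion": {"fear", "outrage", "resignation", "approval",
                "mixed", "indifference"},
}

def parse_codes(raw: str) -> list[dict]:
    """Parse a raw LLM coding response, keeping only well-formed records.

    A record is kept if it is a dict with an "id" field and every
    dimension carries one of the allowed category values.
    """
    records = json.loads(raw)
    valid = []
    for rec in records:
        if not isinstance(rec, dict) or "id" not in rec:
            continue  # drop malformed entries rather than failing the batch
        if all(rec.get(dim) in vals for dim, vals in ALLOWED.items()):
            valid.append(rec)
    return valid

# Example with a hypothetical comment ID:
raw = ('[{"id":"ytc_example","responsibility":"developer",'
       '"reasoning":"deontological","policy":"none","emotion":"outrage"}]')
print(parse_codes(raw)[0]["emotion"])  # outrage
```

Dropping invalid records instead of raising keeps one off-codebook value from discarding an entire batch; rejected IDs could then be re-queued for recoding.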