Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples

- @douglee4687: MS shrank its 30 member ethics and society group down to 7 due to t… (ytr_UgwJZoDDl…)
- @Czty1: I am absolutely in favor of AI in general, as it will provide massiv… (ytr_UgxNAnpgn…)
- @randominternetperson3: ChatGPT was released just 1 year ago. It can code, gener… (ytr_UgxNo1zLI…)
- AI art is only interpolation. Some love it.. some hate it.. but it can literal… (ytc_UgygL32oL…)
- ChatGPT needs to have safeguards, stop guards, something to combat suicide. Some… (ytc_UgxKvQ5hw…)
- I have hEDs and Lupus, my joints sometimes just don’t function properly for me t… (ytc_Ugy_SLgG5…)
- So if AI takes over! Do you really think that the elite will give everyone free … (ytc_UgxglGEKq…)
- Population is too vast to just create new work not in direct competition with AI… (ytc_Ugx6fr09y…)
Comment
Considering the observations of the creator of Safety AI, it’s important to note that we probably won’t have a single AGI, but rather several different AGIs. There will be Google’s AGI, the Chinese state’s AGI, ChatGPT’s AGI, the United States of America’s AGI, and many others. Each of these will try to advance its own interests, making it unlikely that any one AGI could take total control.
Furthermore, we shouldn’t forget that there are brilliant minds whose creativity will be hard for any AGI to match. Our ability to evolve is such that we might even develop technologies, like brain prosthetics, that could further amplify our cognitive abilities.
Although the data may seem concerning, I believe that, while challenging, the situation is still manageable.
youtube
AI Governance
2025-12-04T06:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | indifference |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_Ugxi0E054Raia6B-_ft4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgxLV2MZQNAH93KHS-l4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Ugxkujqgm-QJwBPJuTh4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgwKWrIqQoAv8H3LavR4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgxbGGqj7wZIEdxz8Ox4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"approval"},
  {"id":"ytc_UgzweM6yC1sSLzFclpJ4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"fear"},
  {"id":"ytc_UgxwiSYIIsVjiH09Idh4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_UgzdBZLTzGdLiN9cMkB4AaABAg","responsibility":"unclear","reasoning":"virtue","policy":"none","emotion":"approval"},
  {"id":"ytc_UgzDoqwPAfH4zlvtgnp4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ugw8U1Xe5B0y4qxqVJt4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"indifference"}
]
```
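The raw response is a JSON array of per-comment codings, so looking up a coding by comment ID reduces to indexing the array by its `id` field. A minimal sketch, assuming the response has the array shape shown above (the IDs and rows here are shortened illustrative placeholders, not real comment IDs):

```python
import json

# A raw batch response in the same shape as the example above;
# IDs are shortened placeholders for illustration.
raw = '''
[
  {"id": "ytc_UgxLV2", "responsibility": "distributed",
   "reasoning": "consequentialist", "policy": "unclear",
   "emotion": "indifference"},
  {"id": "ytc_Ugxkuj", "responsibility": "company",
   "reasoning": "consequentialist", "policy": "regulate",
   "emotion": "fear"}
]
'''

# Index the codings by comment ID so any single comment can be
# inspected directly, as in the "Look up by comment ID" view.
codings = {row["id"]: row for row in json.loads(raw)}

row = codings["ytc_UgxLV2"]
print(row["responsibility"], row["emotion"])  # distributed indifference
```

A dict keyed by `id` makes each lookup O(1) and also surfaces duplicate IDs in a batch (later rows silently overwrite earlier ones), which is worth checking before trusting a coding run.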