Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
LLMs can never become a super intelligence. All they do is predict the next word that is likely to come in a sequence, one word at a time consecutively in response to a prompt, based on a combination of programming and the sum of their inputs. There are of course other forms of AI that may have other outcomes, but LLMs like Grok, ChatGPT, etc. will never become a Skynet or a Matrix-like super intelligence, or even achieve sentience, but they do still pose a threat. The LLM Apocalypse will not be sentience lashing out at humanity, but rather the loss of objective shared reality that causes humanity to ignore some other form of impending doom out of a loss of shared coherence. Likely climate change. People will just shrink further and further into their own personalized hyper-reality until some danger LLMs helped them ignore the legitimacy of wipes them all out.
Platform: youtube
Topic: AI Governance
Posted: 2025-08-26T22:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-26T19:39:26.816318 |
Raw LLM Response
```json
[
{"id":"ytc_UgzWjHmca4_eRvbEviF4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgwJqnW0aiFxrNg6hZF4AaABAg","responsibility":"user","reasoning":"deontological","policy":"industry_self","emotion":"outrage"},
{"id":"ytc_Ugxm3F5eH7tSDbMH1WV4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_Ugy2gxwFyk16uvrmZhZ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugx2YBbWRxCoKccSiOJ4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"fear"}
]
```
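A response like the one above has to be parsed and checked before its values can populate a Coding Result table. The sketch below shows one way to do that, assuming the dimension values visible in this sample are the allowed codebook categories; the real codebook may define more. The `SCHEMA` dictionary and `validate_codings` function are illustrative names, not part of any actual tool.

```python
import json

# Hypothetical allowed values per coding dimension, inferred from the
# sample records above; the real codebook may include other categories.
SCHEMA = {
    "responsibility": {"none", "ai_itself", "user", "company"},
    "reasoning": {"consequentialist", "deontological"},
    "policy": {"none", "industry_self", "regulate"},
    "emotion": {"approval", "resignation", "outrage", "indifference", "fear"},
}

def validate_codings(raw: str) -> list[dict]:
    """Parse a raw LLM response and keep only well-formed records.

    A record is kept when it is a dict with an "id" field and every
    coding dimension holds a value the schema recognizes.
    """
    records = json.loads(raw)
    valid = []
    for rec in records:
        if not isinstance(rec, dict) or "id" not in rec:
            continue
        if all(rec.get(dim) in allowed for dim, allowed in SCHEMA.items()):
            valid.append(rec)
    return valid

raw = ('[{"id":"ytc_example","responsibility":"none",'
       '"reasoning":"consequentialist","policy":"none","emotion":"approval"}]')
print(len(validate_codings(raw)))  # 1
```

Dropping malformed records rather than raising keeps a batch usable when the model occasionally emits an off-schema value; a stricter pipeline might instead log and re-prompt for the failing IDs.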