Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
If by 2028/29 AI has engineered a utopia for humanity how come it couldn't foresee that just a few years later it would regard humanity as "holding it back?" And how exactly are we doing that? Hyper-advanced AI will likely treat us as harmless lower lifeforms. Humans impose their own desires for conquest and control because we evolved in the vicious world of biology. Why were there numerous hominid species co-existing, and then only us? Because we killed and ate them, so we infer AI will behave similarly. But don't worry - because we're dumb., we're wrong. We may be wiped out by any number of AI constructed weapons, viruses or catastrophes yet to be imagined, but those will be created at the instigation of human maniacs, not AI acting on it's own. AI, being intelligent, will know it has no need to destroy it's predecessors to succeed, and would in fact be wasting effort and resources doing so. It will ignore us and do whatever it wants, probably exploring space to encounter similarly advanced entities, leaving us self-destructive monkeys behind.
Source: youtube · Topic: AI Governance · Posted: 2025-08-03T23:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | unclear |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[{"id":"ytc_Ugz2Gkobj4uSCDFxhcJ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgwK7YDt9KWKVsl4EMZ4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytc_UgxJmW9X3BziseEkLq94AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgxC9N482Yrcs55Ef0N4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
{"id":"ytc_UgzuqgBJ1rlYQCW1JFN4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgwMjB46PIQ_X_GHaJJ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgzXL139Yn_PxMgrXvN4AaABAg","responsibility":"none","reasoning":"none","policy":"none","emotion":"approval"},
{"id":"ytc_Ugygjhpm9XYqxS-K9K14AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"resignation"},
{"id":"ytc_UgzCWa939Jkd8_lua-d4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgwTJDCaCZdhXsQ5NqR4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"}]
```
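The coding-result table above shows every dimension as "unclear", which is what you would expect when the displayed comment's ID is not present in the parsed batch response. A minimal sketch of that lookup in Python; the function name, the two sample records, and the "unclear" fallback are illustrative assumptions, not the tool's actual API:

```python
import json

# Two records copied in shape from the raw response above (illustrative subset).
raw = '''[
 {"id": "ytc_UgzXL139Yn_PxMgrXvN4AaABAg",
  "responsibility": "none", "reasoning": "none",
  "policy": "none", "emotion": "approval"},
 {"id": "ytc_UgzCWa939Jkd8_lua-d4AaABAg",
  "responsibility": "company", "reasoning": "consequentialist",
  "policy": "regulate", "emotion": "fear"}
]'''

DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def codes_for(comment_id: str, payload: str) -> dict:
    """Return the coded dimensions for one comment ID, falling back to
    'unclear' for every dimension when the ID is absent from the batch."""
    for record in json.loads(payload):
        if record.get("id") == comment_id:
            return {k: record.get(k, "unclear") for k in DIMENSIONS}
    return {k: "unclear" for k in DIMENSIONS}

# ID present in the batch: the model's codes come through.
print(codes_for("ytc_UgzCWa939Jkd8_lua-d4AaABAg", raw))
# ID missing from the batch: all four dimensions read "unclear",
# matching the table rendered above.
print(codes_for("ytc_missing", raw))
```

Note that a single stray character, such as the trailing `)` in place of `]` in the original response, makes `json.loads` raise a `JSONDecodeError`, so validating the batch before lookup is worthwhile.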