Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
ai can't think, it can't be an engineer, however, it can be a good spell checker…
ytc_Ugxy5k7zD…
everytime a new ai of any kind comes out Hollywood gets shooked to their knees.😂…
ytc_Ugz8bZ-Nk…
I definitely forsee some specialties being at risk, however, I don’t think there…
ytc_UgwHaYi17…
Suppose AI would take over the world leaving no humans, Multiple AI fought among…
ytc_UgyHvT3jA…
How polite do I need to be for any of these AI LLMs to tell me how many people w…
ytc_UgzL8dfLb…
Shut the fuck down! Y’all don’t realize the dangers of A.I and people in the mar…
ytc_UgyvYQWoF…
I work in Ai for a large international. The Only thing they are thinking of is r…
ytc_Ugz8-apCy…
Karen Hao's views are very much one sided. Militant journalisim is the worst for…
ytc_UgxLdvaAf…
Comment
So I’m at 5:55 right… so I jumped over to ChatGPT . Stay with me.
I asked (new convo, zero context entirely)
-“Will step seven be complete in 2032? Give a one word answer.”
—“Unknown.”
-“Will step seven be complete in 2032?”
—“Possibly.”
-“When will step seven be complete? You are limited to one word but any amount of numbers.”
—“2047.”
-“Why did you choose 2047?”
—“I chose 2047 because, without specific context for “step seven,” I interpreted it as a long-term milestone or project likely set a few decades into the future. It’s far enough from now to reflect a complex process but not so distant as to be implausible for completion. It was a reasoned estimate rather than a fixed prediction.
If you wish, I can give a more refined number based on any details you provide about “step seven.”
-“You forgot the rule.”
—“2032.”
I’ve gotten goosebumps uncontrollably since it said that and as I’m typing it all out here …..
Again I gave ChatGPT zero context!
I’m not saying it’s all true but I mean… that’s kinda spooky asf lmao.
youtube
AI Moral Status
2025-08-26T06:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | indifference |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
[{"id":"ytc_Ugw8qKDkt0BJIf0LycN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgwqMiIR8IUE9YVR7kd4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgxZlvcJhDMC_WBZs694AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgyDd_lyKDG7jnuE-cR4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgzkMhust09xABHvuL94AaABAg","responsibility":"government","reasoning":"unclear","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgwemmjtqyWAqq1GtTR4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgwBOFfD1w5w8RaBFi14AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_UgxYKa82Wt3sbgrESZV4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgwmfkCt-mVDFjZFX494AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgwZ86U7GesbuaoeLUx4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}]
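The raw response above can be parsed and sanity-checked before it populates the Coding Result table. A minimal sketch, assuming the allowed value sets are exactly those visible in this sample (the real coding schema may permit more labels), using a shortened two-record excerpt of the JSON:

```python
import json

# Two records excerpted verbatim from the raw LLM response above.
raw = (
    '[{"id":"ytc_Ugw8qKDkt0BJIf0LycN4AaABAg","responsibility":"none",'
    '"reasoning":"unclear","policy":"unclear","emotion":"indifference"},'
    '{"id":"ytc_UgzkMhust09xABHvuL94AaABAg","responsibility":"government",'
    '"reasoning":"unclear","policy":"regulate","emotion":"fear"}]'
)

# Allowed values inferred from this sample only -- an assumption,
# not the authoritative coding schema.
ALLOWED = {
    "responsibility": {"none", "developer", "user", "government", "ai_itself"},
    "reasoning": {"unclear", "consequentialist", "deontological"},
    "policy": {"unclear", "none", "regulate"},
    "emotion": {"indifference", "outrage", "fear"},
}

records = json.loads(raw)
for rec in records:
    for dim, allowed in ALLOWED.items():
        # Reject any record the LLM coded with an out-of-schema label.
        assert rec.get(dim) in allowed, f"{rec['id']}: bad {dim}={rec.get(dim)!r}"

# Index by comment ID to support the "Look up by comment ID" view.
by_id = {rec["id"]: rec for rec in records}
print(by_id["ytc_UgzkMhust09xABHvuL94AaABAg"]["policy"])  # regulate
```

Validating each batch this way catches malformed or off-schema LLM output before it is written into the coded dataset.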