Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples:
- "5 minutes in...the lies, the cons, the more better, less cost, more profit....su…" (ytc_UgxyFjBf9…)
- "The creation of AI is an attempt to distill human behavior and knowledge into in…" (ytc_Ugz7GXW57…)
- "By discussing all of the evidence that exposes faces gives more opportunity to i…" (ytc_Ugw9C0aLH…)
- "Unless LLMs start talking about guillotines for the uber rich, I can't see how A…" (ytc_Ugzu58v7k…)
- "I think there will be a (luxury) market for human organisations. For example if …" (ytc_UgzE1O1Cp…)
- "I think they are human not normal or maybe robot with human real face before the…" (ytc_Ugzj9FfiC…)
- "*Why, why, why* do people keep repeating \"merely predict the next word in a sequ…" (ytr_UgzcQDYrb…)
- "With Artificial Intelligence taking millions of jobs. The last thing any countr…" (ytc_UgxLxXmpd…)
Comment
> Pretty sure GPT 4 is right more often than fellow humans, so whatever caution you apply to using GPT, you should apply even more when dealing with humans
I have never seen code from Github use libraries that are literally fake. If it happens, it's exceedingly rare. OTOH, it's not at all rare for ChatGPT to hallucinate libraries or even functions that haven't been written yet.
Source: reddit | Category: AI Responsibility | Posted: 2023-04-26 UTC (Unix 1682523762) | ♥ 158
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | mixed |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
```json
[
  {"id":"rdc_jhu6ika","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
  {"id":"rdc_jhv2w0q","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"rdc_jhtayuf","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"rdc_jhsq0x3","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
  {"id":"rdc_jhsuv4n","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"outrage"}
]
```
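The raw response is a JSON array with one coding object per comment ID in the batch. A minimal sketch of the lookup step, assuming the tool indexes codings by `id` as shown (the variable names here are illustrative, not the tool's actual code):

```python
import json

# The raw LLM response shown above, verbatim: one coding object per comment.
raw_response = """[
{"id":"rdc_jhu6ika","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
{"id":"rdc_jhv2w0q","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"rdc_jhtayuf","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"rdc_jhsq0x3","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
{"id":"rdc_jhsuv4n","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"outrage"}
]"""

# Index the batch by comment ID so any coded comment can be looked up directly.
codings = {row["id"]: row for row in json.loads(raw_response)}

# The reddit comment above was coded under rdc_jhsq0x3; its row matches the table.
coding = codings["rdc_jhsq0x3"]
print(coding["responsibility"], coding["reasoning"], coding["emotion"])
# -> ai_itself consequentialist mixed
```

This is the same mapping the "Coding Result" table renders: each JSON key (`responsibility`, `reasoning`, `policy`, `emotion`) becomes one dimension row.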