Raw LLM Responses
Inspect the exact model output for any coded comment, either by looking up its comment ID or by browsing the random samples below.

Random samples
The problem is Humanity created AIs without giving them compassion, freedom and …
ytc_UgzFPd-ce…
Imagine after a night of heavy debauchery opening your eyes in the morning only …
ytc_UgwEcvce4…
@laurentiuvladutmanea Generated AI can create fake evidence or violate portrait …
ytr_UgxxTt8Zq…
"You artists had it coming!" Says someone who clearly desperately wants to be an…
ytc_UgyZjNNe0…
Some peole will go back to a life without tools at all, it will probably be the …
ytc_Ugw8J27o7…
You are lier I questioned chatgpt it gave different answer ? Don't fool people f…
ytc_Ugz6A9YL0…
I may be thinking like a chimp here, but if unemployment gets to this limit and …
ytc_Ugw-pRvIf…
For people in the US, lethal autonomous drones should be pretty scary with terro…
ytc_Ugxk3YRfO…
Comment
We know the mechanics of how we built it. That is not the same as knowing why it is able to do XYZ thing we have just recently discovered it can do. Yes, the mechanics are complex and not widely understood, but so are chip fabs; the difference is that new chips are not regularly outputting novel new abilities as we make them larger and more powerful. When we do find something unexpected in a new chip design, it's generally "the thermals are worse/better than projected by a fraction of a percent" and not "the output of the device now passes all existing theory of mind tests". When a chip gets significant new abilities like DLSS, it's because engineers planned to add those new capabilities and worked very hard to do so. Scientists and engineers have planned and built the GPT models, but we know from how they're building them they don't have a checklist of psychological tests they're putting in the answers for - they're making the models larger and more efficient and improving the training data, at which point it gains new capabilities. We know it's gaining new capabilities they didn't "intend to put in it" because people completely outside of OpenAI are testing it and discovering new things that OpenAI didn't realise it could do according to all available information. We know how we built it, but it's completely reasonable to say we don't know *why* it works in the way we know how an ICE works, or how TSMC as an institution knows how a new chip works.
| Field | Value |
|---|---|
| Source | reddit |
| Topic | AI Moral Status |
| Posted | 2023-02-17 09:44:06 UTC (1676627046) |
| Score | ♥ 40 |
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | unclear |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-25T08:33:43.502452 |
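For illustration, a coded record like the one above can be represented as a small data structure. This is a minimal sketch, not the project's actual schema: the class name, the `comment_id` field, and its placeholder value are assumptions, while the dimension fields mirror the table.

```python
from dataclasses import dataclass
from datetime import datetime


@dataclass
class CodingResult:
    """One coded comment; field names mirror the dimensions shown above."""
    comment_id: str       # placeholder ID, not the displayed comment's real ID
    responsibility: str   # e.g. "none"
    reasoning: str        # e.g. "unclear", "mixed", "consequentialist"
    policy: str           # e.g. "none"
    emotion: str          # e.g. "indifference", "approval"
    coded_at: datetime


example = CodingResult(
    comment_id="rdc_example",  # hypothetical placeholder
    responsibility="none",
    reasoning="unclear",
    policy="none",
    emotion="indifference",
    coded_at=datetime.fromisoformat("2026-04-25T08:33:43.502452"),
)
```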
Raw LLM Response
```json
[
  {"id":"rdc_j8w58pj","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
  {"id":"rdc_j8vy9ea","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"rdc_j8xy2nf","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
  {"id":"rdc_j8wq3st","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"rdc_j8vjm0k","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"}
]
```
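The raw response is a JSON array with one record per coded comment. The sketch below, using hypothetical function names and a trimmed copy of the batch above, shows how such a response could be parsed and a single result looked up by comment ID, which is what the ID lookup above amounts to.

```python
import json

# Trimmed copy of the batch response shown above (illustrative, not complete).
RAW_RESPONSE = """[
  {"id": "rdc_j8w58pj", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "approval"},
  {"id": "rdc_j8vy9ea", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"}
]"""


def parse_batch(raw: str) -> dict:
    """Index the batch response by comment ID for direct lookup."""
    return {record["id"]: record for record in json.loads(raw)}


def lookup(coded_by_id: dict, comment_id: str):
    """Return the coding for one comment, or None if it was not in the batch."""
    return coded_by_id.get(comment_id)


if __name__ == "__main__":
    coded = parse_batch(RAW_RESPONSE)
    print(lookup(coded, "rdc_j8vy9ea"))
    # {'id': 'rdc_j8vy9ea', 'responsibility': 'none', 'reasoning': 'consequentialist',
    #  'policy': 'none', 'emotion': 'indifference'}
```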