Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up a response by comment ID, or browse the random samples below.
Random samples
- "My push back against this guy's argument is that it presupposes we are helpless …" (ytc_UgzLFydI3…)
- "I dont think i agree with your later assesment. Yes, ppl do stupid things. I rem…" (ytc_UgzNdV6Ud…)
- "Robots are fucking lines of code and since YOU yes you the person reading this c…" (ytc_UgjdJpJ3U…)
- "It’s beyond stupid to think other people have to prove that safe AI can be built…" (ytc_UgyW8E5eG…)
- "When the A.I. becomes sentient it's going to remember how Dave insulted it and, …" (ytc_Ugw6NgLW-…)
- "I have noticed this too. Any negative news towards Korea always results in "but …" (rdc_clvcz84)
- "These are all examples of using AI incorrectly. The problem is between the compu…" (ytc_UgxfE_yvG…)
- "I respect his pretty consistent stance against AI even though Studio Ghibli doe…" (ytc_UgyQtLPMM…)
Comment
@michaeldeierhoi4096
That isn't what I was saying.
1. AI uses 3 mechanisms for its actions: Directed abstraction, Associative Learning, and brute force trial and error. Humans are not limited to these.
2. AGI theory is not related to AI theory. All AI systems fall under computational theory whereas AGI does not. AGI theory includes everything like human, animal, and alien intelligence.
3. There is research today on AGI theory. This has been going on for almost nine years.
4. ASI is the same theory as AGI. The maximum IQ for ASI is around 350, not 1 million or 1 billion.
5. Singularity, nanobots, and brain implants to make you smarter are all nonsense. They have no scientific basis.
Source: youtube · Video: AI Governance · Posted: 2023-11-02T20:2… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | mixed |
| Policy | unclear |
| Emotion | mixed |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
{"id":"ytr_UgzNkXg5fpJpeUimq8t4AaABAg.9wXxsqyQdPC9yQYVBtDn4J","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytr_Ugzi4_w55F7JmEkdmvJ4AaABAg.9wXpYyFNC9E9wcC4YcvSBj","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"resignation"},
{"id":"ytr_UgzGqNeYT7sqeqLTe4N4AaABAg.9wWlZoLhVz39zLbEKGIOxI","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"approval"},
{"id":"ytr_UgxhwdasSP65cHVSIJR4AaABAg.9wW2PCBrZOv9xBHLOX_WOy","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytr_Ugyh-gAtClVmxQVckoV4AaABAg.9wW1cjqmDkR9wcCFob6KDK","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytr_UgxFIRy_0yRlslYmqwx4AaABAg.9wVcbKZFBIB9wc_rUjs16N","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytr_UgwurcAvZoC_eRtNFyt4AaABAg.9wUb8uqOa8b9wmixrrd4I5","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
{"id":"ytr_Ugzggv6ikwgryIrFlPN4AaABAg.9wUWI_5oI939wcBgfz1ckk","responsibility":"none","reasoning":"deontological","policy":"unclear","emotion":"mixed"},
{"id":"ytr_Ugzggv6ikwgryIrFlPN4AaABAg.9wUWI_5oI939wcYtLaTXHO","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytr_UgzozodGAuuuDzX8FX14AaABAg.9wUV27ztHxX9wcAiRGlpuD","responsibility":"society","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}
]
```
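The raw response above is a JSON array of coding records, one per comment, keyed by comment ID. A minimal sketch of how such a response could be parsed and indexed for the "look up by comment ID" view is below; the field names (`id`, `responsibility`, `reasoning`, `policy`, `emotion`) come from the sample, while the short IDs in `RAW` and the `index_codings` helper are illustrative assumptions, not part of the actual pipeline.

```python
import json

# Illustrative stand-in for a raw LLM response; real records use full
# comment IDs like "ytr_UgzNkXg5fpJpeUimq8t4AaABAg...".
RAW = """[
 {"id":"ytr_abc","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
 {"id":"ytr_def","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"resignation"}
]"""

# The four coding dimensions shown in the Coding Result table.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def index_codings(raw: str) -> dict:
    """Parse a raw response and index coding records by comment ID."""
    records = json.loads(raw)
    by_id = {}
    for rec in records:
        # Reject records missing any coding dimension.
        missing = [d for d in DIMENSIONS if d not in rec]
        if missing:
            raise ValueError(f"record {rec.get('id')!r} missing {missing}")
        by_id[rec["id"]] = {d: rec[d] for d in DIMENSIONS}
    return by_id

codings = index_codings(RAW)
print(codings["ytr_def"]["emotion"])  # resignation
```

With the index in hand, rendering a per-comment table like the one above is a dictionary lookup by comment ID.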