Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
With respect to the guest academic achievement and experience. His idea that AI super Intelligence become singularity is correct but I have to correct some of his points.
My answer is AI is not energy efficient it needs large datasets and more energy consumption for computation to do a simple task, instead the brain operates at lower voltage 20 watts while performing trillions of operations per second. The cognitive brain can learn and generalize from a small experience contrary to AI needs to learn from billions of data to derive a pattern from a question. He compared the AI like solving complex problems that are impossible to humans as fractals, but they are actually a recursive function that are repeating certain patterns infinitely. AI can only mirror the cognitive brain because these repeated patterns in algorithms cannot reach consciousness using an infinite state of a finite system it has to be transformative in nature to a more complex state to prove its logical reasoning by self-awareness unless it's independently coherent this is my philosophical theology of intelligence.
Source: youtube
Topic: AI Governance
Posted: 2025-09-06T05:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | indifference |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_UgwAZ1MTxSna7HJroaB4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgzCjjcrWrWB5lVHDLd4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_UgwxKAMCwz8lep7w0714AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugx0kCVmg1KxqFiIUPd4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_UgzJGnxpYCGb25CECjN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"industry_self","emotion":"approval"},
  {"id":"ytc_Ugyv6Zc9bth551xMiZ14AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"fear"},
  {"id":"ytc_UgxMeKF9dCwDVd6DdY54AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgyWQY4tJYAALq70EC94AaABAg","responsibility":"company","reasoning":"virtue","policy":"liability","emotion":"mixed"},
  {"id":"ytc_UgzThRXluJvW2EFPgvl4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"indifference"},
  {"id":"ytc_Ugxfn2ppd0G_TtROjC94AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]
```
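The raw response is a JSON array of per-comment codings keyed by comment ID, one record per coded comment. A minimal sketch of how such a batch can be parsed and validated against the four coding dimensions is below; the allowed values are only those observed in this response (the full codebook may define more), and `parse_coding` is a hypothetical helper name, not part of any tool shown here.

```python
import json

# Allowed values per dimension, inferred from codes seen in this response;
# the actual codebook may define additional categories (assumption).
SCHEMA = {
    "responsibility": {"developer", "company", "ai_itself", "distributed", "none"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"regulate", "ban", "liability", "industry_self", "none", "unclear"},
    "emotion": {"fear", "approval", "indifference", "resignation", "mixed"},
}

def parse_coding(raw: str) -> dict:
    """Parse a raw LLM coding response (JSON array) into a dict keyed by
    comment ID, rejecting any record with an out-of-schema value."""
    out = {}
    for rec in json.loads(raw):
        cid = rec["id"]
        for dim, allowed in SCHEMA.items():
            if rec[dim] not in allowed:
                raise ValueError(f"{cid}: unexpected {dim}={rec[dim]!r}")
        out[cid] = {dim: rec[dim] for dim in SCHEMA}
    return out

# Two records excerpted verbatim from the raw response above.
raw = """[
  {"id":"ytc_UgzThRXluJvW2EFPgvl4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"indifference"},
  {"id":"ytc_Ugyv6Zc9bth551xMiZ14AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"fear"}
]"""

codings = parse_coding(raw)
print(codings["ytc_UgzThRXluJvW2EFPgvl4AaABAg"]["policy"])  # regulate
```

Validating at parse time rather than downstream makes a model that drifts off the codebook (a misspelled category, an invented label) fail loudly on the batch instead of silently polluting the coded dataset.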