## Raw LLM Responses

Inspect the exact model output for any coded comment: look a response up by comment ID, or pick one of the random samples below.

### Random samples
- "Why does it feel like these predictions always focus on one variable? The human …" (ytc_Ugy2TO1b1…)
- "Im glad other people feel the same way. When I find out its AI I immediately los…" (ytc_UgwnYoYLJ…)
- "These \"AI advocates\" are dust. I don't want to see some disembodied notion of \"L…" (ytc_UgxqbQmIB…)
- "You should say that AI can’t capture drama *right now*. In 5, 10, or 15 years it…" (ytc_Ugz0XUzZ0…)
- "Let’s be forreal, with AI putting millions of people out of job. To think UBI is…" (ytc_UgyRZGHyB…)
- "It is notable that we judge artificial 'consciousness' only in reference to our …" (ytc_UgzB0eF6_…)
- "yeah at that kind of speed i dont think our AI could detect a wall in front of y…" (ytc_Ugza7iyda…)
- "And AI will write a code that can't be altered or changed or someone can't over …" (ytc_UgzeqReFl…)
### Comment

> Smarter than humans? ChatGPT spits rehashed web content back at us (really really well) - it is not intelligent or self aware.
> It has the same IQ of any software humans have every written. Zero.
> Having said that, mad things will happen as it is unleashed (which it will be). But talking about it with words like "smarter" is just mystical thinking.

Platform: youtube · Dataset: AI Governance · Posted: 2023-04-19T11:4…
### Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-26T23:09:12.988011 |
### Raw LLM Response

```json
[
{"id":"ytc_UgzpESE2NiPcWRtLkHx4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgywWb9b3XvXm7Sf0iB4AaABAg","responsibility":"government","reasoning":"deontological","policy":"ban","emotion":"fear"},
{"id":"ytc_UgxxmjpGb3AezvaVNzl4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"fear"},
{"id":"ytc_UgxJcsY5ewsABmgh6wV4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_Ugws5fNn_M_0i8wMrop4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugw_87XmoXnYAy9ZR6J4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugz21tMrGKtY57o93tR4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgxYBGKjAr6akRbtJ1h4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_UgzQ1OTL1d0Lp-T3dmN4AaABAg","responsibility":"government","reasoning":"deontological","policy":"none","emotion":"fear"},
{"id":"ytc_UgxvyRIPWSk1qxHXX9R4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"}
]
```
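A raw response like the one above can be turned into a lookup table keyed by comment ID. The sketch below is a minimal, hedged example: the four dimension names come from this dump, but the sets of allowed labels are only those observed here (the full codebook may define more), and the comment ID used in the usage line is a hypothetical placeholder, not a real ID from the dataset.

```python
import json

# Allowed labels per coding dimension, as observed in this dump.
# ASSUMPTION: the real codebook may define additional labels.
SCHEMA = {
    "responsibility": {"none", "government", "developer", "user", "ai_itself"},
    "reasoning": {"consequentialist", "deontological", "virtue"},
    "policy": {"none", "ban", "regulate", "liability"},
    "emotion": {"indifference", "fear", "approval"},
}

def parse_raw_response(raw: str) -> dict:
    """Parse a raw LLM response (a JSON array of per-comment codings)
    into a comment-ID -> coding map, dropping rows that are missing an
    ID or that use a label outside the schema above."""
    coded = {}
    for row in json.loads(raw):
        cid = row.get("id")
        if not cid:
            continue  # malformed row: no comment ID to key on
        if all(row.get(dim) in allowed for dim, allowed in SCHEMA.items()):
            coded[cid] = {dim: row[dim] for dim in SCHEMA}
    return coded

# Usage with a hypothetical comment ID ("ytc_example" is illustrative only):
raw = ('[{"id":"ytc_example","responsibility":"none",'
       '"reasoning":"consequentialist","policy":"none",'
       '"emotion":"indifference"}]')
codings = parse_raw_response(raw)
print(codings["ytc_example"]["emotion"])  # indifference
```

Validating against the schema before indexing means an off-vocabulary label from the model is silently dropped rather than stored; a stricter pipeline might instead log or re-prompt on such rows.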