Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
The real issue with AI is that it knows s x+a ends in a weighted result. The problem is that it doesn't understand why it does. Just like when you're a child and learning math for the first time, you have been told 1+1=2 or 2x2=4 you memorize because you were told this I'd the answer, it's not until you understand the why that you are able to apply math to problems not memorized. Ai is not able to do this, which is far more dangerous than if it did, because you can not reason with something that has no understanding, it will proceed taking an action it will always take.
| Source | Posted | Likes |
|---|---|---|
| youtube | 2024-01-03T20:2… | ♥ 1 |
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | resignation |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_UgwJARs-r336yTk9zmt4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugz84aLqbRacCQlAwid4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgyEnNwWCYzFshNU25R4AaABAg","responsibility":"company","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
{"id":"ytc_Ugxt5CXv59Dzrkhfg194AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
{"id":"ytc_UgxN0Xe8QL9WM6DaKdJ4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugyo6JHbisUsE3lOQat4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgyLEYySBCqkdnBWwI54AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgzHy7OL4jT0RkcVRZB4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgxJ1vV5RJzr03UV6_J4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_Ugz_tv9aBU9lgPjFxNt4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"resignation"}
]
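The raw response above is a JSON array with one object per coded comment, whose fields match the dimensions in the Coding Result table. A minimal sketch of how such a response could be parsed to look up one comment's coding — `coding_for` and the single-entry `RAW_RESPONSE` are illustrative, not part of the actual pipeline:

```python
import json

# Illustrative single-entry response mirroring the last object in the
# raw LLM response above (the one matching the Coding Result table).
RAW_RESPONSE = """[
  {"id": "ytc_Ugz_tv9aBU9lgPjFxNt4AaABAg",
   "responsibility": "developer",
   "reasoning": "consequentialist",
   "policy": "unclear",
   "emotion": "resignation"}
]"""

# The four coding dimensions shown in the Coding Result table.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def coding_for(raw: str, comment_id: str) -> dict:
    """Return the coded dimensions for one comment ID.

    Raises KeyError if the ID is absent; missing dimensions
    default to "unclear" (an assumption, not pipeline behavior).
    """
    for row in json.loads(raw):
        if row.get("id") == comment_id:
            return {dim: row.get(dim, "unclear") for dim in DIMENSIONS}
    raise KeyError(comment_id)

print(coding_for(RAW_RESPONSE, "ytc_Ugz_tv9aBU9lgPjFxNt4AaABAg"))
```

Parsing the whole array before searching keeps malformed JSON from silently yielding a partial result: `json.loads` fails loudly, which is usually what you want when validating raw LLM output.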