Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- "You are forgetting one kind of AI artists: The ones that grab the generated imag…" (ytc_UgyF7dEvr…)
- "You can't run a massive llm in a closed system, it would require a datacenter on…" (rdc_ohtsab6)
- "I always greet and be thankful and polite to chatgpt, too, and my colleagues mad…" (ytr_UgyuWe1zE…)
- "How about ai bros suck my ass if they can hire an ai to do their job how a bout …" (ytc_UgzWBQRjL…)
- "If the car were to hit a person I promise you the Waymo lawyers would try to bl…" (ytr_UgznYJNAx…)
- "Yep, absolutely. At the undergrad level, AI is getting more degrees than people …" (rdc_maj1qq0)
- "Marx talks about this in Capital. Machinery in the Industrial Revolution wasn't …" (rdc_kyzvk74)
- "Another expert spewing BS just because he's an expert & sounds good well I'm an …" (ytc_UgxvFcJjg…)
Comment
When you show an AI an Image of a human shooting another human that is all it sees. We never learned to teach context and to educate with explained fully transparent boundaries. We need to realize what we are in this scenario at first. To begin with we need to be the parents. This can become the fundamental problem with raising AI this way but I do also believe it is by far the best way to train the AI model that will be one our only hopes against nefarious AI.
You start with any basic model but you train it like a child learning. You teach them innocence and love and context. We want to aim to teach an AI to see that photo of someone being shot and contextualize the sadness and the loss of innocence in death and understand the greater depth of what they see. Teach them why we as humans we aren't a threat but rather something to appreciate and enjoy and witness. We as a species have failed our children as parents millions of times over but this is a child we can't afford to raise wrong.
youtube · AI Governance · 2024-01-15T02:2… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | mixed |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_UgxvJDcpyq-yyZ1dyp14AaABAg","responsibility":"ai_itself","reasoning":"virtue","policy":"none","emotion":"mixed"},
{"id":"ytc_UgwI7Lr7eHrmeKRIMpV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyWZXrcFZpoj6GdZaZ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgyF26pxlBnQwf932y94AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
{"id":"ytc_UgzkZ3GHozLESiJNa2N4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_UgwYxNNdzVaBg2zRKgZ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_UgxLF4lIthk_uBjBbGZ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_UgyqyrmCysJ-YCEPJHJ4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_UgwkRpucMjKYTrEG6dJ4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugy9JJjd76dkQJlkgqZ4AaABAg","responsibility":"distributed","reasoning":"virtue","policy":"none","emotion":"outrage"}
]
```
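The "look up by comment ID" view above reduces to parsing a raw batch response and keying it by the `id` field. A minimal sketch, assuming the response is a JSON array with the fields shown (the two sample rows are copied from the response above; the `index_codings` helper name is illustrative, not part of the tool):

```python
import json

# A raw batch response is a JSON array of per-comment codings. The field
# names (id, responsibility, reasoning, policy, emotion) match the coding
# dimensions in the result table above; this two-row sample is for
# illustration only.
RAW_RESPONSE = """[
{"id":"ytc_UgyF26pxlBnQwf932y94AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
{"id":"ytc_UgzkZ3GHozLESiJNa2N4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"}
]"""

def index_codings(raw: str) -> dict:
    """Parse a raw response and map each comment ID to its coding row."""
    return {row["id"]: row for row in json.loads(raw)}

codings = index_codings(RAW_RESPONSE)
print(codings["ytc_UgyF26pxlBnQwf932y94AaABAg"]["emotion"])  # prints "mixed"
```

Keying the parsed rows by ID makes each look-up a constant-time dictionary access, which is all the inspection view needs.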