Raw LLM Responses
Inspect the exact model output for any coded comment, either by looking it up directly by comment ID or by browsing the random samples listed below.
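The same comment-ID lookup can be done programmatically. A minimal sketch, assuming the coded comments are stored one JSON object per line; the file name `coded_comments.jsonl` and the storage layout are assumptions, not the project's actual format:

```python
import json

def load_coded_comments(path: str) -> dict[str, dict]:
    """Read a JSON Lines file of coded comments and index the records by comment ID."""
    by_id: dict[str, dict] = {}
    with open(path, encoding="utf-8") as fh:
        for line in fh:
            line = line.strip()
            if line:
                record = json.loads(line)
                by_id[record["id"]] = record
    return by_id

# Hypothetical usage: fetch one coded comment by its ID.
# coded = load_coded_comments("coded_comments.jsonl")
# print(coded["ytc_UgzU0EmuZ55E9T-ENeh4AaABAg"])
```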
Random samples:

- `ytr_UgwaCNMmu…`: IMO, using AI to make images is alright, as long as you’re not claiming that you…
- `ytc_Ugxd9yjVD…`: I think some of the early things on his list were helpful for basic AI images an…
- `ytc_Ugwprg8qt…`: I think that if taken seriously this is hypocritical for Elon to ask. FSD is pr…
- `ytc_Ugy7r7ecf…`: AI is a force multiplier: it can make the world better, or far worse, depending …
- `ytc_Ugza3zzFE…`: Thank you for sharing your story! I know it wasn’t easy. I’m battling my oldest …
- `ytc_Ugy5vDQWG…`: Small wireless earbuds, and a talking life like AI operating system on our phon…
- `ytr_Ugy9c4JE-…`: @JJ-ue737so it's the time you value not the creativity. Prompting your vision m…
- `ytc_UgxoCeFd9…`: Saying prompting an AI makes you an artist is like saying pulling the lever on a…
Comment
I feel like there's a lot of "constantly calling it AI makes people forget what it really is" going on. A lot of what what they're talking about seems to be about people interpreting the output from AI with the assumption that it is intelligent.
Also, the whole "10% probability of killing us all conversation" is silly. Tons of inventions started in the same exact space. The first bridge ever built probably had a 90% of failing within a couple of minutes. When the Wright brothers first took flight there was for certain a fairly good chance something would go wrong and they'd crash. Comparing it to consumer level implementations makes no sense. Making it 99,999% safe comes through development and iterations, it rarely starts like that.
youtube · AI Moral Status · 2025-10-30T20:4… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | resignation |
| Coded at | 2026-04-26T23:09:12.988011 |
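The coded record maps onto a small, flat structure. A minimal sketch, with value sets taken only from what is visible in this sample; the class name, field types, and the full set of allowed values are assumptions:

```python
from dataclasses import dataclass

@dataclass
class CodingResult:
    """One coded comment: the four dimensions from the table above plus the coding timestamp."""
    comment_id: str
    responsibility: str  # observed values: "user", "developer", "ai_itself", "none"
    reasoning: str       # observed values: "consequentialist", "deontological", "mixed", "unclear"
    policy: str          # "none" for every record in this sample
    emotion: str         # observed values: "resignation", "approval", "indifference", "outrage", "fear", "mixed"
    coded_at: str        # ISO 8601 timestamp, e.g. "2026-04-26T23:09:12.988011"

# The row shown above as a record (the comment ID is taken from the matching
# entry in the raw response below):
example = CodingResult(
    comment_id="ytc_UgzU0EmuZ55E9T-ENeh4AaABAg",
    responsibility="user",
    reasoning="consequentialist",
    policy="none",
    emotion="resignation",
    coded_at="2026-04-26T23:09:12.988011",
)
```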
Raw LLM Response
[
{"id":"ytc_UgwZodMs5G-ScGJk6NJ4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
{"id":"ytc_Ugzy2yuIDLIM_CF3lWN4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwmsPNYieT5vHcwryp4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzU0EmuZ55E9T-ENeh4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgyacT5WrdoN34gM1jJ4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"approval"},
{"id":"ytc_UgwmSr8MLRJYI62U8XR4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgyaNycF917xJFuPJzN4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
{"id":"ytc_UgyEgAFkRq2FvPFgBpR4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_Ugxsez32VUPDBBhb8xl4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugwe3oAVH-HeJftRBHN4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"mixed"}
]
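The raw response is a JSON array with one object per coded comment, so it can be parsed and indexed directly. A minimal sketch of validating and indexing such a batch; the expected field names come from the response above, while the strictness of the checks is an assumption:

```python
import json

EXPECTED_FIELDS = {"id", "responsibility", "reasoning", "policy", "emotion"}

def index_batch(raw_response: str) -> dict[str, dict]:
    """Parse one raw LLM batch response and index its records by comment ID."""
    records = json.loads(raw_response)
    if not isinstance(records, list):
        raise ValueError("expected a JSON array of coded comments")
    by_id: dict[str, dict] = {}
    for record in records:
        missing = EXPECTED_FIELDS - record.keys()
        if missing:
            raise ValueError(f"record {record.get('id', '<no id>')} is missing fields: {sorted(missing)}")
        by_id[record["id"]] = record
    return by_id

# Hypothetical usage, with the array shown above pasted into `raw`:
# coded = index_batch(raw)
# print(coded["ytc_UgzU0EmuZ55E9T-ENeh4AaABAg"]["emotion"])  # -> "resignation"
```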