Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
Technically, aren't humans intelligent machines? ;D
I keep bringing this up, whether it's in response to talk about group learning to make smarter self driving cars (group learning=Geth and Borg in my scifi laden mind ;D), or updates on Google's AI project reading, making art, and writing poetry. I just don't think humans are ready for this conversation, and certainly not the personhood debate. We can't even agree who can use the restrooms, or who's legally aloud to marry, or in some countries drive, but we think we're ready to address the consequences of our cars wanting to vote? I don't think so.
And I'm sure there are many in the industry who think either sentience can't ever happen, or "we have safeguards in place." But I doubt that if sentience is going to occur, there'll be anything we could do to thwart it. By that point - and we may already be reaching that point - it's a domino effect. There's no stopping it. I just want the people in the industry thinking VERY CAREFULLY about the consequences of their actions and choices, and to admit to themselves that maybe they don't know or even understand the outcomes of some of their prosed projects. I'm not fearful for us, I'm fearful for what we create and the subjugation we're threating upon them. I just want these creators to be using a little more *forethought*. Those shows and books that inspired them, I want them to think about the morals Star Trek was trying to teach us with Data and Lore. There really is a lesson there about how AI's need to be treated and respected, and I'm not convinced that those developing it are really listening to those messages.
Source: youtube · Posted: 2016-08-10T12:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | mixed |
| Policy | unclear |
| Emotion | mixed |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_UghlVHdKSsFDl3gCoAEC","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UghOLJXJkinIxXgCoAEC","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"approval"},
  {"id":"ytc_Ughp0m-7OLTnKngCoAEC","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Uggkc-b_dQ7sPXgCoAEC","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgjUboft16pmnXgCoAEC","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UggXYUtVSt6pTXgCoAEC","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"approval"},
  {"id":"ytc_UghCEDSQhCbKyHgCoAEC","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
  {"id":"ytc_UgiRZubvHnok63gCoAEC","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ugh_eqMzofsL5ngCoAEC","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"approval"},
  {"id":"ytc_Ught2widn_LlsngCoAEC","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"mixed"}
]
```
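The response is a JSON array with one object per comment, keyed by comment ID, carrying the four coding dimensions shown in the table above. A minimal sketch of how such a response can be parsed and a single comment's codes looked up by ID (the `lookup` helper and variable names are illustrative, not part of the tool; the sample uses two rows copied from the response above):

```python
import json

# Raw LLM response: a JSON array of per-comment codes (two rows from above).
raw = """[
  {"id":"ytc_UghlVHdKSsFDl3gCoAEC","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Uggkc-b_dQ7sPXgCoAEC","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"mixed"}
]"""

# Index the rows by comment ID so one comment's codes can be fetched directly.
codes = {row["id"]: row for row in json.loads(raw)}

def lookup(comment_id: str) -> dict:
    """Return the coded dimensions for one comment, without the ID field."""
    row = codes[comment_id]
    return {k: v for k, v in row.items() if k != "id"}

print(lookup("ytc_Uggkc-b_dQ7sPXgCoAEC"))
# → {'responsibility': 'developer', 'reasoning': 'deontological',
#    'policy': 'unclear', 'emotion': 'mixed'}
```

Indexing by ID up front keeps each lookup O(1), which matters when one response covers a batch of comments.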