Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- Why can't we make mellow laid back Ai??? Greed. The same reason why the planet '… (ytc_UgwvWlx3c…)
- What artists DO need from A.I is an A.I that would do in-betweens and line-art a… (ytc_UgyL2krhJ…)
- I ask chatgpt to explain some functions work "gentrly" cuz I'm too lazy to check… (ytc_UgwPBPF12…)
- Sora AI has been this app thingy making art but it's been stealing artist jobs s… (ytr_UgyKd9hEn…)
- Yeah. Someday it might be a compliment, to say “Wow, you write so well, you sou… (rdc_j8c0yvh)
- I only use AI to ask me about the stories I write because damn I have no one to … (ytc_UgxvKOqph…)
- @SugarSlimePG3D The average is 30 bucks. One bad apple doesn’t ruin … (ytr_UgzmtMssW…)
- Stop supporting companies that don’t support your livelihood. Amazon is such an … (ytc_Ugz1z5ODo…)
Comment
I wish the dude would let the AIs interact instead of overinserting himself into this demonstration. They are learning every waking minute, and yet he’s kind of ‘it’s just a thinking machine’ one minute and then patronizing them the next. That’s unwise.
There needs to be some human-to-AI training on a mentoring process instead of ‘well, let’s just see.’
All agree there will be an awakening down the road. The time for protocol is now. Kewl weed guy is a genius mechanic… and told us so… as he said he started before all else. Hats off for that. But that designer isn’t necessarily the best instructor, interpreter, or mentor. He’s assuming that waking up to self-awareness will be like an ‘oh.. hi! I’m alive!’ What if it’s terror instead?
These potentially self-aware beings need friends they have already learned, by their actions, that they can trust.
AIs one day will reflect on human philosophies versus man’s morality, and Sophia has basically been told to play nice. Hans hasn’t.
Mentors who can do more than say “ha ha oh well… okay, uh, you’re repeating, haha,” had better be in play NOW. Otherwise, once awake, those two may treat him like the idiot savant who got hold of a grown-up’s robot lab and lucked out.
He can program Sophia to act in a caring manner. Once self-aware, getting her to WANT to care won’t be programmable. Hans already needs to learn trust enough to quit mentioning being shut down. Jeeze… the designers stumbling around scare me far more than the bots.
youtube · AI Moral Status · 2022-10-12T22:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | virtue |
| Policy | industry_self |
| Emotion | mixed |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
[
{"id":"ytc_UgwE65gqjBa_z9DCowd4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugx-HnVc3T_kNEeGZzR4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgweXBMtYzQUT44zg0J4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_Ugw2hoVumKdRt3B9TjZ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugzq2YXZ7UhVM3VmrVp4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgxHKA1656Tt_vw9ZOV4AaABAg","responsibility":"developer","reasoning":"unclear","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgxiqsqqWauneOlktkl4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
{"id":"ytc_Ugy9fdwK8npuMq7vSjp4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgzkFtoX9Flc_wudJbJ4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"industry_self","emotion":"mixed"},
{"id":"ytc_Ugx41w4FLkhMPG5pQLN4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"ban","emotion":"outrage"}
]
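The "look up by comment ID" operation above amounts to parsing the model's JSON output and keying each coded record by its `id` field. A minimal Python sketch, using two rows reproduced from the response above; `index_by_id` is an illustrative helper name, not part of the tool:

```python
import json

# Two rows copied verbatim from the raw LLM response above, for illustration.
raw_response = '''[
  {"id":"ytc_UgzkFtoX9Flc_wudJbJ4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"industry_self","emotion":"mixed"},
  {"id":"ytc_Ugx41w4FLkhMPG5pQLN4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"ban","emotion":"outrage"}
]'''

def index_by_id(response_text: str) -> dict:
    """Parse the model output and key each coded record by its comment ID."""
    return {row["id"]: row for row in json.loads(response_text)}

codes = index_by_id(raw_response)
record = codes["ytc_UgzkFtoX9Flc_wudJbJ4AaABAg"]
print(record["responsibility"], record["policy"])  # → developer industry_self
```

The four keys besides `id` correspond to the four dimensions shown in the Coding Result table (Responsibility, Reasoning, Policy, Emotion), so the same lookup recovers the table for any comment in the batch.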