Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples

- "Here are a few things I want to talk about: 1. AI doesn't have to learn like a h…" (ytc_UgxdZShYh…)
- "I don't mind A.I. art in general if it is just used casually and used as a refer…" (ytc_UgxZZ8t_z…)
- "I'm also in an adjacent group, geophysics, and I can tell you AI is useless for …" (rdc_nnurg1e)
- "i use ai sometimes if i don't know how my drawing could be improved... I still d…" (ytc_Ugz4xYh85…)
- "Yeah, I feel like for junior programmers it is going to be a hurdle for becoming…" (rdc_jigcca5)
- "3:12 Apple Crap AI is mentioned like 10 times and Mistral is not mentioned at al…" (ytc_Ugz915MNG…)
- "I don't think ai is bad, it's a tool, not a weapon, but it's how it's used, usin…" (ytc_Ugypb0Oa7…)
- "Those companies all use Stable Diffusion. You only to pay them so they run Stabl…" (ytr_Ugzko3AnI…)
Comment
@cchris874 Well, first of all we don't really know what exactly consciousness is, but from an intelligence standpoint, machine intelligence using deep reinforcement learning has proven to be quite effective at recognizing patterns and making intelligent decisions. To answer the questions you put forth above (specifically the one about a model for transforming silicon into consciousness) I would argue that we do have a model: deep neural networks.
Over the past decade, we have made great leaps in our understanding of both cognitive neuroscience and deep learning using neural networks. We have fully mapped out and simulated the brain of the C. elegans worm, we have researched cognitive maps in the hippocampus and built Tolman-Eichenbaum machines, we have made great strides in image processing with the advent of convolutional neural networks, we have experimented with different reinforcement learning techniques such as DQN and PPO, we have used deep RL extensively in the field of robotics and have trained ML models to play many video games at human or above human level such as AlphaGo beating the world GO champion and Alphacraft proving to be effective at strategizing against human opponents, and finally we have collected massive amounts of data and used it to train larger and larger LLM's such as GPT-4 which has shown to have near-human reasoning skills in many categories.
From my understanding of machine learning and cognitive neuroscience, consciousness (whatever that might be) is not binary but rather a spectrum, and I fully believe that current ML models exist somewhere on that spectrum, even if they are just multilayered multidimensional tensors multiplied together using linear algebra and trained using back propagation calculus.
So, going back to the three questions above, the answer to all of them (in my opinion at least) is yes, yes, and yes (though the code hardly matters, it's the training data).
Source: YouTube, "AI Moral Status", 2023-05-22T18:4… (♥ 2)
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | unclear |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:59.937377 |
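The four coding dimensions in the table can be modeled as a small record type with per-dimension label sets. This is only a sketch: the allowed values below are inferred from the labels visible on this page (e.g. `developer`, `deontological`, `regulate`, `resignation`), and the actual code book may define more.

```python
from dataclasses import dataclass

# Label sets observed on this page; assumed, not the full code book.
RESPONSIBILITY = {"none", "developer", "government", "user", "ai_itself", "distributed"}
REASONING = {"unclear", "consequentialist", "deontological", "virtue"}
POLICY = {"none", "regulate", "liability"}
EMOTION = {"indifference", "resignation", "fear", "outrage", "approval"}


@dataclass
class Coding:
    """One coded comment: an ID plus one label per dimension."""
    id: str
    responsibility: str
    reasoning: str
    policy: str
    emotion: str

    def validate(self) -> bool:
        # True only when every dimension carries a known label.
        return (self.responsibility in RESPONSIBILITY
                and self.reasoning in REASONING
                and self.policy in POLICY
                and self.emotion in EMOTION)
```

Validating each record as it comes back from the model makes malformed or hallucinated labels easy to catch before they enter the dataset.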
Raw LLM Response
[
{"id":"ytr_UgzwTRaYUjVnwDc3lsN4AaABAg.9up5lwPGrjs9vVH-zBG6Ud","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytr_UgzEFg39EsPysHNc5St4AaABAg.9rAMnTyDjNgA3uKRdIO6eJ","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"fear"},
{"id":"ytr_UgxQccf-8B4TUrqGyMl4AaABAg.9pWVj0tUQ659q14WXqm5ph","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytr_Ugw-oRlWGSYn-QvXvJd4AaABAg.9pR78enMEBd9pftDObyFG9","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytr_UgyGhxmohp6NLWlX-y14AaABAg.9p2EKfuLRIi9psFg5fP4NB","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytr_UgwzrK8GJXk6qhNKWaF4AaABAg.9l0fzL_YZ989xPLp1Fc2bg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytr_UgzuR-nEuxysUKMP7Z54AaABAg.9kcCNArsIT-9keuNEBXZIp","responsibility":"none","reasoning":"deontological","policy":"liability","emotion":"approval"},
{"id":"ytr_UgyFIflQZQnAm-bPG2R4AaABAg.9jyqnTNMsU_9o7cFF4aHkU","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"fear"},
{"id":"ytr_UgwwSRK1vI-yuESCN7V4AaABAg.9jlOW_XL-Z49kN3L6nup0c","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytr_UgzzUkbf2TBm2KpnI9R4AaABAg.9jlDr304qci9o7b8IS77hO","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"resignation"}
]
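Since the raw response is a JSON array of records keyed by comment ID, "look up by comment ID" reduces to parsing the array and indexing it by `id`. A minimal sketch, using two records copied from the response above:

```python
import json

# Two records taken verbatim from the raw LLM response shown above.
raw = '''[
  {"id":"ytr_UgzwTRaYUjVnwDc3lsN4AaABAg.9up5lwPGrjs9vVH-zBG6Ud","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytr_UgxQccf-8B4TUrqGyMl4AaABAg.9pWVj0tUQ659q14WXqm5ph","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}
]'''

# Parse the batch and index each record by its comment ID,
# so any single coding can be fetched directly.
codings = {rec["id"]: rec for rec in json.loads(raw)}

result = codings["ytr_UgxQccf-8B4TUrqGyMl4AaABAg.9pWVj0tUQ659q14WXqm5ph"]
print(result["emotion"])  # indifference
```

The same dictionary doubles as a completeness check: comparing its keys against the batch of comment IDs sent to the model reveals any comments the model silently dropped from its response.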