Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- Iimprovise with AI mehn, I'm a graphic designer , motion designer , AI can't rep… (`ytc_Ugw24YDJu…`)
- Come on, AI needs water to exist..lot of water 💦…cut water to this machines and … (`ytc_UgzwO9Plr…`)
- Are you even aware of the creation that YOU ARE MADE & CAN BE DESTROYED JUST LIK… (`ytc_UgxKGQ_2B…`)
- Interesting issues brought up. But why does it matter if a person is killed by a… (`ytc_Ugj34Qf8U…`)
- I feel like saying being creative and teaching AI to make art and songs is “easi… (`rdc_lguo1hp`)
- Hey serious question: we’re can I speak to one of these base LLM models. Please … (`ytc_UgwDbs0Mc…`)
- Video: "Facial recognition is more likely to misidentify people of color..." Me… (`ytc_Ugyfz5xfM…`)
- If it helps, I believe Professor Hawking has said something on a similar matter.… (`rdc_cthow5k`)
Comment
36:50 can't you kind of "hand program" a neural network in that sense by hand picking weights/parameters though?
Like I'm pretty sure you can essentially recreate logic gates with small feed forward neural networks by hand selecting values without any gradient based learning necessary. (& on the learning side we could even technically do that by hand with pen & paper if we had a sufficiently small model like a perceptron)
Platform: youtube | Video: AI Moral Status | 2025-10-30T22:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | indifference |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id": "ytc_UgzUhVnD579w9AryyVJ4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgzW5g9esTRdu17Kp914AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgzQGQlqGjoGTNHal6d4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_Ugysf6A-oXWKHw4m1Lh4AaABAg", "responsibility": "ai_itself", "reasoning": "virtue", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgwGR9i5MpZHSHASEPd4AaABAg", "responsibility": "none", "reasoning": "deontological", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_Ugx-N0B7JS01wGfwz3t4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgxkQo9f55QhgUMT7hV4AaABAg", "responsibility": "ai_itself", "reasoning": "unclear", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgwxZUr602dA9DkHwwh4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgyvwcJta1oj-z6TUQx4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "approval"},
  {"id": "ytc_Ugweqfc1jkagDq1w7Cx4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "regulate", "emotion": "approval"}
]
```
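A batched response like the one above can be indexed by comment ID before filling in the per-comment result tables. The sketch below is a minimal illustration, not the tool's actual implementation: the field names follow the JSON shown, but the validation logic and function name (`parse_codings`) are assumptions.

```python
import json

# Two entries taken verbatim from the batched response above, for brevity.
raw_response = '''[
  {"id": "ytc_UgzUhVnD579w9AryyVJ4AaABAg", "responsibility": "none",
   "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_Ugweqfc1jkagDq1w7Cx4AaABAg", "responsibility": "developer",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "approval"}
]'''

# The four coding dimensions shown in the "Coding Result" table.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def parse_codings(raw: str) -> dict[str, dict[str, str]]:
    """Index the LLM's JSON array by comment ID, rejecting any row
    that is missing one of the coding dimensions."""
    codings = {}
    for row in json.loads(raw):
        missing = [d for d in DIMENSIONS if d not in row]
        if missing:
            raise ValueError(f"{row.get('id')}: missing {missing}")
        codings[row["id"]] = {d: row[d] for d in DIMENSIONS}
    return codings

coded = parse_codings(raw_response)
print(coded["ytc_Ugweqfc1jkagDq1w7Cx4AaABAg"]["policy"])  # regulate
```

Keying on the `id` field is what makes the "Look up by comment ID" view above possible: each comment's row in the result table is just the dictionary entry for its ID.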