Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- "AI Still need humans, they might be smart and control all electronics, but we ca…" (ytc_UgxkUPx6l…)
- "What If Sam, Elon and the other big players are just shape shifters? When Cerber…" (ytc_UgwITF4vE…)
- "I imagine AI is like a child, you have to raise it correctly. Give it time to un…" (ytc_UgwPdF0gK…)
- "They want AI and Robotics so bad because they don't need to pay them they don't …" (ytc_UgweBQPoY…)
- "lol This is where I laugh at Trump supporters. They love to go on about how He i…" (ytr_Ugh2uxIcM…)
- "AI Artitst are people that saw a parody of an artist in cartoons and try to impe…" (ytc_UgxoBie52…)
- "How are scientist trying to find that missing link that humanity has (emotion, k…" (ytc_UgzsPy2Ld…)
- "ChatGPT can't win, yet. But other "AI"s aka chess engines or Deepmind are alread…" (ytr_Ugzma0KP0…)
Comment
This video is a case in point as to why I don't see chatbots accomplishing any wholesale substitution of jobs as people currently speculate. In short, a computer technology is not capable of substituting for an entire human, as humans behave in a non-deterministic manner in their decision making, while real computers can only make decisions in a binary manner (literally, they uses pulses of 0s and 1s to encode information at the lowest level, from your simplest devices to the almighty mainframes that do obscene numbers of computations per second). No matter how advanced processing technology gets, it will never meet or exceed the processing capability of a human brain, and the brain is a force to be reckoned with on its own as it is basically the father of all technology known to man. Another thing that is interesting is that there exists theoretical computers (like the Turing machine) that behave non-deterministically, but that can't possibly be built on actual hardware.
youtube
AI Responsibility
2024-02-18T16:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | deontological |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:26:44.938723 |
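A coding like the one above can be sanity-checked against the value sets that actually appear in this document. The sketch below is hypothetical (the tool's real code books may define more categories than the ones observed here); `ALLOWED` and `validate` are illustrative names, not part of the tool:

```python
# Value sets observed in this document's codings; the real code
# books may be larger, so treat this as a minimal sketch.
ALLOWED = {
    "responsibility": {"none", "user", "ai_itself"},
    "reasoning": {"consequentialist", "deontological", "virtue"},
    "policy": {"none"},
    "emotion": {"approval", "indifference", "outrage", "resignation", "fear"},
}

def validate(coding: dict) -> list:
    """Return the dimensions whose value falls outside the known set."""
    return [dim for dim, allowed in ALLOWED.items()
            if coding.get(dim) not in allowed]

# The coding shown in the table above passes cleanly.
coding = {"responsibility": "none", "reasoning": "deontological",
          "policy": "none", "emotion": "indifference"}
print(validate(coding))  # []
```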
Raw LLM Response
```json
[
  {"id": "ytc_UgypNYedn2sp8DJJeo54AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgyLE9J43zlEwVIzAq94AaABAg", "responsibility": "none", "reasoning": "deontological", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgwEYJd7ODSEN0VTtOF4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgxNC0E0KqRzvMvFtsh4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgxkVdA_YLOEHZjvB754AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgzaYOQA5PjuIOqEX4h4AaABAg", "responsibility": "user", "reasoning": "deontological", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgxK4QwUyqEzPQH-lzd4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgzdP-mYV7VjS-kCerZ4AaABAg", "responsibility": "none", "reasoning": "virtue", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgxQ4iT4NQIsEumXFfB4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgxR8YvAYE9DLQg5iJV4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"}
]
```
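The "look up by comment ID" view above can be sketched as follows. This is a minimal, hypothetical helper, assuming the raw LLM response is a JSON array of per-comment codings like the one shown; the function and variable names are illustrative, not the tool's own API:

```python
import json

# A raw LLM response in the shape shown above, abbreviated to two
# entries (real batches contain one object per coded comment).
raw_response = '''
[
  {"id": "ytc_UgypNYedn2sp8DJJeo54AaABAg", "responsibility": "none",
   "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgyLE9J43zlEwVIzAq94AaABAg", "responsibility": "none",
   "reasoning": "deontological", "policy": "none", "emotion": "indifference"}
]
'''

def index_codings(raw: str) -> dict:
    """Parse the raw model output and index each coding by comment ID."""
    return {item["id"]: item for item in json.loads(raw)}

def lookup(codings: dict, comment_id: str):
    """Return the coding for one comment ID, or None if it was not coded."""
    return codings.get(comment_id)

codings = index_codings(raw_response)
print(lookup(codings, "ytc_UgyLE9J43zlEwVIzAq94AaABAg")["emotion"])  # indifference
```

Indexing once and looking up by ID keeps each inspection O(1), which matters when a batch response covers many comments.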