Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples

- ytc_UgxCpk05a…: "Of course not. No AI user has the intelligence to make a logical leap like that.…"
- ytc_UgyfZM4Y1…: "I definitely agree AI shouldn’t take art from other people, but I would like to …"
- ytr_UgzMmLfe5…: "There will session musicians less happy with it to be fair. We’re all in the shi…"
- ytc_UgyyON8Nn…: "Displacement is typically an issue only for the current generation that gets pus…"
- ytc_Ugxs8fT9d…: "Thank you for this video, you do a really good job explaining why AI image gener…"
- ytc_UgwaJYpir…: "If you have ever driven a big truck through any big city, you know their plan ca…"
- ytc_Ugy42Sk4H…: "your glasses are too visible robot man, i can see the darkness in your eyes too …"
- ytc_Ugy8usv7M…: "I think what we should do, is hold AI models accountable for the things they do.…"
Comment
Hello Professor Hawking and thank you for coming on for this discussion!
A common method for teaching a machine is to feed it large amounts of problems or situations along with a “correct” result. However, most human behavior cannot be classified as correct or incorrect. If we aim to create an artificially intelligent machine, should we filter the behavioral inputs to what we believe to be ideal, or should we give the machines the opportunity to learn unfiltered human behavior?
If we choose to filter the input in an attempt to prevent adverse behavior, do we not also run the risk of preventing the development of compassion and other similar human qualities that keep us from making decisions based purely on statistics and logic?
For example, if we have an unsustainable population of wildlife, we kill some of the wildlife by traps, poisons, or hunting, but if we have an unsustainable population of humans, we would not simply kill a lot of humans, even though that might seem like the simpler solution.
reddit
AI Bias
1437998319.0 (2015-07-27 UTC)
♥ 1689
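The posted-at value above is stored as a raw Unix epoch timestamp. A minimal sketch of converting it to a human-readable UTC datetime, using only Python's standard library:

```python
from datetime import datetime, timezone

# Unix timestamp as stored in the comment metadata above
posted = 1437998319.0

# Convert to a timezone-aware UTC datetime
posted_utc = datetime.fromtimestamp(posted, tz=timezone.utc)
print(posted_utc.isoformat())  # → 2015-07-27T11:58:39+00:00
```

Passing `tz=timezone.utc` avoids the local-timezone interpretation that the naive `fromtimestamp` form would apply.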
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | unclear |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
```json
[
  {"id": "rdc_cti1yju", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "mixed"},
  {"id": "rdc_cthnoeb", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "rdc_cthxc0i", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "rdc_cthtjt1", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "mixed"},
  {"id": "rdc_cthrpzb", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"}
]
```
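The raw response is a JSON array with one object per coded comment, keyed by comment ID. A minimal sketch of parsing such a batch response back into per-comment records for lookup (the `parse_codings` helper is illustrative, not part of the tool; the payload below copies two records from the response above):

```python
import json

# Batch response as returned by the model (truncated to two records for brevity)
raw = (
    '[{"id":"rdc_cthnoeb","responsibility":"none","reasoning":"unclear",'
    '"policy":"none","emotion":"indifference"},'
    '{"id":"rdc_cthxc0i","responsibility":"ai_itself","reasoning":"consequentialist",'
    '"policy":"none","emotion":"fear"}]'
)

def parse_codings(payload: str) -> dict:
    """Index the coded records by comment ID so one comment can be inspected."""
    records = json.loads(payload)
    return {rec["id"]: rec for rec in records}

codings = parse_codings(raw)
print(codings["rdc_cthnoeb"]["emotion"])  # → indifference
```

Indexing by `id` mirrors the page's "look up by comment ID" workflow: one `dict` lookup retrieves all coded dimensions for a comment.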