Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
@paritybit-q7e
Who said I have no knowledge in this field? Do you know me? Why should we believe that you have? And who is telling engineers what to do and how to act? I simply stated that Google's employment of such an engineer speaks much about the company (I do have disdain for such engineers because to keep their job, they will sometimes create bias where there is none). Please re-read my comment as your brain dodged parts of it and made up some others. And that really does speak volumes...about your character.
If you know what a state machine is, you know it is incapable of bias. Higher level computing machines consist of three things: a computing device, software, and input data. The last two can be biased because the human that wrote the software or collects and enters the data can be (I do not mention the product or output data because it is dependent on these two, and can be used or discarded at the user's discretion).
But your lead statement is incorrect; a raw computing machine has no bias; only when it runs software or accepts input data that itself is biased. Remember, AI begins as software that accepts input data and runs on a computing machine. The only difference is that, Artificial intelligence being by definition sentient, leaps beyond its programming, and can make assumptions and predictions, as well as decisions, based on insufficient input data. A question for you: if an AI system is declared sentient, then commits a purely biased act, who is responsible? What if nothing is found in the original code that would lead to such an act? And what would the "engineer" in the article do about it?
Back in my earliest days of college, one of the first assignments was to write a small program that converted numeric (decimal) data into binary and hexadecimal equivalents and display them. I made the program fancy by having the machine play the "commencement" section of "Pomp & Circumstance" while it was open. And who can forget that first programmer's school assignment "Hello World!" In your mind, I suppose even those snippets of code are biased, because biased humans wrote them.
youtube · AI Moral Status · 2022-07-10T13:4… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | deontological |
| Policy | none |
| Emotion | outrage |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
[
{"id":"ytr_UgzxhfZS15aY6AMbkR14AaABAg.9d3e_zOHgAd9d4Pg4m9f4i","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"approval"},
{"id":"ytr_UgxHcbWAySw8iYSVhs54AaABAg.9d3PAI_wS2C9d7iMC53cYD","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytr_UgwwallTXXJtPrZ8W7p4AaABAg.9d25uPtRM1z9dGd9U2gX35","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytr_UgwwallTXXJtPrZ8W7p4AaABAg.9d25uPtRM1z9dIsguvRPhH","responsibility":"company","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytr_UgzDyNgXE3aYB-m25Od4AaABAg.9d1l3iOzYQi9d38XUgewFD","responsibility":"none","reasoning":"none","policy":"none","emotion":"indifference"},
{"id":"ytr_Ugzv_ivKdBXcfbSJft54AaABAg.9d0wzJ2xDZN9d15MBJeyg0","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytr_Ugy96n9wDW779Xr6iLR4AaABAg.9d0wCdo43sr9d1Y66o7eQ1","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytr_Ugy96n9wDW779Xr6iLR4AaABAg.9d0wCdo43sr9d1edPYHYbP","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytr_UgwXEv5zJfUhogWG6kV4AaABAg.9d0irMRtsmm9d83if24d_v","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"fear"},
{"id":"ytr_UgwXEv5zJfUhogWG6kV4AaABAg.9d0irMRtsmm9d8F1ze7G9B","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"approval"}
]
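The raw response above is a JSON array of coding rows, one per comment ID, with one value per dimension. A minimal sketch of how such a response could be parsed and sanity-checked before loading it into the results table — the allowed value sets below are inferred only from the values visible in this sample, and the full codebook may define more categories:

```python
import json

# Allowed values per dimension, inferred from the sample response above
# (assumption: the real codebook may include additional categories).
ALLOWED = {
    "responsibility": {"company", "developer", "ai_itself", "none"},
    "reasoning": {"deontological", "consequentialist", "mixed", "none"},
    "policy": {"regulate", "liability", "none"},
    "emotion": {"outrage", "fear", "approval", "indifference"},
}

def validate_codings(raw: str) -> list[dict]:
    """Parse a raw LLM response and keep only well-formed rows.

    A row is kept when it is a dict, carries an "id", and every
    dimension holds a value from its allowed set.
    """
    rows = json.loads(raw)
    valid = []
    for row in rows:
        if not isinstance(row, dict) or "id" not in row:
            continue
        if all(row.get(dim) in vals for dim, vals in ALLOWED.items()):
            valid.append(row)
    return valid

# Hypothetical example row in the same shape as the response above:
raw = ('[{"id":"ytr_example","responsibility":"company",'
       '"reasoning":"deontological","policy":"none","emotion":"outrage"}]')
print(validate_codings(raw))
```

Dropping malformed rows (rather than failing the whole batch) lets the remaining codings be stored while the rejects are re-queued for the model.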