Raw LLM Responses
Inspect the exact model output for any coded comment, or look up a comment by its ID.
Random samples:

- ytr_Ugyqu6plX…: "Human beings are the problem. Thats why odds are AI eliminates us from existenc…"
- ytc_UgyB5nN70…: "There are bad things about it , for sure ! To use AI to falsely present persons…"
- ytc_UgxWByo5m…: "it is impossible to regulate a.i., it is like asking a calculator to lie to you …"
- ytc_Ugxxt47s-…: "49:50 doesn't take away the fact that we are being reactive to this tech, and so…"
- ytc_UgwnAuvOT…: "AI could help in improving recharge from 5 to 60%, reduce agri demand by 90%…"
- ytr_UgzytU6FR…: "They are not crying over cartoons. They are crying because we all know what this…"
- ytr_UgyxuuOEX…: "THIS! People act like AI is this Uber intelligent living being. It's not, it's a…"
- ytc_Ugxkr-6nU…: "If I were an uncreative person, I'd say their defense for supporting generative …"
Comment
This has never been and never will be a matter of math or any other science. A human must always choose what is worse and what is better. That is a judgment call based on morals and ethics, and morals and ethics are always changing. 'AI' in this context is a misnomer. The 'AI' or the algorithm is just a program doing exactly what a human programmed it to do, and that human made judgment calls based on personal understanding of morals and ethics when writing that program.
The real question is, do we want only one company making sentencing judgments for all of society, or do we want people from the communities affected by these decisions to be making them?
Source: youtube, posted 2022-07-27T17:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | unclear |
| Emotion | resignation |
| Coded at | 2026-04-26T23:09:12.988011 |
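The four coding dimensions above take values from a small closed set. As a minimal sketch of how a code record could be validated, the value sets below are inferred only from the codes that appear on this page (the full codebook may include more categories), and the `validate` helper is illustrative, not part of the pipeline shown here:

```python
# Allowed values per coding dimension, inferred from the codes visible
# in this dataset; the real codebook may contain additional categories.
CODEBOOK = {
    "responsibility": {"developer", "company", "government", "unclear"},
    "reasoning": {"deontological", "consequentialist", "unclear"},
    "policy": {"regulate", "liability", "unclear"},
    "emotion": {"resignation", "outrage", "indifference", "approval"},
}

def validate(code: dict) -> list:
    """Return (dimension, value) pairs that fall outside the codebook.

    A missing dimension is reported as (dimension, None).
    """
    return [
        (dim, code.get(dim))
        for dim, allowed in CODEBOOK.items()
        if code.get(dim) not in allowed
    ]

# The coded comment shown above passes cleanly:
sample = {"responsibility": "developer", "reasoning": "deontological",
          "policy": "unclear", "emotion": "resignation"}
print(validate(sample))  # -> []
```

An empty list means every dimension carries a known code; anything else flags a record for manual review.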
Raw LLM Response
```json
[
  {"id":"ytc_Ugyu3lJ5jSotu-gpeM14AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"resignation"},
  {"id":"ytc_UgwEeWXibr3X0c3MlLp4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_Ugy7MfwCiwLr1xFxzs94AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_Ugxfum2yi79CYBmvcPt4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgwsT0OPQBFlk8UndN54AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"unclear","emotion":"resignation"},
  {"id":"ytc_UgxFJXsutbsCY-dF59J4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_Ugz9ZLn1oRLlCSv8t2R4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"resignation"},
  {"id":"ytc_UgwY-6m4IEF5K4A9EHV4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgzK4qdxUuBHi1o0nKJ4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"approval"},
  {"id":"ytc_UgwEBWGO2Y_kiryJ8Xx4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"outrage"}
]
```
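Looking up a code by comment ID amounts to parsing one raw batch response and indexing its records. A minimal sketch, assuming the JSON array structure shown above (the two-record `RAW_RESPONSE` and the function name `index_by_comment_id` are illustrative; the IDs are taken from the batch above):

```python
import json

# A shortened raw LLM batch response in the format shown above
# (two records taken from the ten-record batch, for illustration).
RAW_RESPONSE = """
[
  {"id":"ytc_Ugyu3lJ5jSotu-gpeM14AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"resignation"},
  {"id":"ytc_UgwY-6m4IEF5K4A9EHV4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"}
]
"""

def index_by_comment_id(raw: str) -> dict:
    """Parse a raw batch response and index its records by comment id."""
    records = json.loads(raw)
    return {rec["id"]: rec for rec in records}

codes = index_by_comment_id(RAW_RESPONSE)
rec = codes["ytc_UgwY-6m4IEF5K4A9EHV4AaABAg"]
print(rec["policy"])  # -> regulate
```

With every batch indexed this way, the "look up by comment ID" view is a single dictionary access per comment.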