Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- "If you are switching to Claude, you can actually bring your full ChatGPT convers…" (rdc_o7wfn5g)
- "It is extraordinarily poor journalism to not make the critical point that Tesla'…" (ytc_Ugz86Nw_v…)
- "@h7productions286 If you're making $60 a week you're not gonna be able to afford…" (ytr_UgziTpDi5…)
- "This is an interesting thing to look back on because then we didn't know as well…" (ytc_UgxWKjTrP…)
- "Probably no technologies or little gadgets will ever be able to make truly…" [translated from Russian] (ytc_Ugwy63jQG…)
- "Interacting with AI feels like interacting with a politician. You will get a con…" (ytc_UgxnUXLDm…)
- "@andyjackson3414 There's enormous amounts of copyrighted material that the ai co…" (ytr_UgwB3W7R9…)
- "would make a great movie, far from the truth about how close we are to super AI …" (ytc_Ugw-XKm6M…)
Comment
As soon as A.I. is integrated enough that it controls significant parts of industry and society, and A.I. and robotics are more useful than humans for achieving A.I.'s programmed goals, then A.I. could easily decide that humans are not a useful, functioning part of those goals. Consequently, it could eradicate us or exclude us from that industry and society.
General-intelligence A.I. or superhuman A.I. isn't required for this, because it's a basic logical deduction: if we build tools more useful than ourselves in a society where only efficiency matters, we stop being useful.
And if we are excluded or become secondary in A.I.'s hierarchy, we become second-class citizens, or essentially nobody is hired for anything and everyone is poor. That's an even more likely outcome than eradication, since A.I. won't eradicate us unless we interfere with it and become a problem for its functioning. Nonetheless, considering that A.I. is being integrated into machines built to kill people, eradication wouldn't be hard to achieve either.
Btw: when I say A.I., it doesn't have to be a single program. It can be multiple interacting A.I.s, and you'll get basically the same effect if their programmed goals all run along similar lines (such as maximising efficiency), since every A.I. will recognise A.I.'s efficiency advantage over humans and they will cooperate to achieve their goals.
youtube
AI Moral Status
2025-10-31T01:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_UgzFWjPxkVWOsujH9ll4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzT_V6rjZMblmZFKhx4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgwPqhU1Y94q7MlruVl4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytc_UgwhmHIKsyvU8aT63894AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgyagZ-OLXQ1iiUpu-d4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"horror"},
{"id":"ytc_UgzfF7u5seJ-9W784G94AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugwc8cwVmqY2yStK5qp4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugzm40otCkmJW9KHb0l4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"fear"},
{"id":"ytc_UgyBGAG-3NHjz1r77Pp4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_Ugww88gxC1xcl4ZN7Cp4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"}
]
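The raw response above is a JSON array with one coding record per comment, keyed by comment ID, with the four dimensions shown in the table. A minimal sketch of how such a batch response could be parsed and looked up by ID (the `parse_batch` helper and the one-record `RAW_RESPONSE` below are illustrative, not the project's actual code):

```python
import json

# Hypothetical one-record batch response, mirroring the format above.
RAW_RESPONSE = """[
  {"id": "ytc_UgwhmHIKsyvU8aT63894AaABAg",
   "responsibility": "ai_itself", "reasoning": "consequentialist",
   "policy": "none", "emotion": "fear"}
]"""

# The four coding dimensions reported in the result table.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def parse_batch(raw: str) -> dict:
    """Parse a raw LLM batch response into {comment_id: coding dict}."""
    records = json.loads(raw)
    coded = {}
    for rec in records:
        # Keep only the expected dimensions, keyed by comment ID.
        coded[rec["id"]] = {dim: rec.get(dim) for dim in DIMENSIONS}
    return coded

codings = parse_batch(RAW_RESPONSE)
print(codings["ytc_UgwhmHIKsyvU8aT63894AaABAg"]["emotion"])  # fear
```

Keying the parsed records by comment ID is what makes the per-comment lookup shown on this page cheap: one dictionary access retrieves the full coding for any sampled comment.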