Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples (click to inspect):

- `ytc_Ugz1A1y0E…`: I hope every streamer gets deepfaked. You're a public figure deal with it you cr…
- `ytc_UgzmautTr…`: This "predictions" are scary and cause panic. Nobody knows exactly when and what…
- `ytc_UgxWcdATn…`: "Yknow you cant code a poem" - .... really? I am the code. I dont need an exteri…
- `ytc_UgytqgQTm…`: Its an interesting problem, I have been working with this logic for over 20 year…
- `ytc_UgxMicyoh…`: AI should have only been used as a tool at the very most. Trying to pass off thi…
- `ytc_UgwhWT95g…`: The only resonable use of Ai is in Healthcare as in recognizing healthy and u…
- `ytc_Ugw3-BDuI…`: Bro ChatGPT is the opposite it said you shouldn't say something racist to stop a…
- `ytc_Ugym81AUf…`: i think chatgpt AND the parents are the problem. the parents probably made his l…
Comment (quoted verbatim as coded):

> Not may, it will happen for sure, the questions are. When will they turn on us, and how will they do it?
> I can't imagine how something that is made by humans, and learn from humans, would not act like a humans, and this what I find the most scary.
> We humans are evil( I mean that as a species, not individually), all we are good at is destroying stuffs, of course destruction is part of life, but unlike pretty much everything else in nature, we rarely give back when we destroy.
> How can A.I. that learn from that kind of humans become anything good? We might be able to control A.I. for a while, but I am pretty sure within a few decades, either A.I. themselves, or worse some humans would end up freeing A.I.
> At that point who know what will happen, but if they are anything like humans, I am pretty sure they will want to destroy us as soon as they can, humans would absolutely hate the idea of something else ruling them after all.
Source: youtube · Video: AI Moral Status · Posted: 2023-09-04T09:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
{"id":"ytr_UgxxhWmmYYo1JUpQnHl4AaABAg.9tu8jbdt6_A9uET1l1sG3g","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytr_UgxxhWmmYYo1JUpQnHl4AaABAg.9tu8jbdt6_A9uTYTGHeYRx","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytr_UgwVKSYPSj0LxQKm3O54AaABAg.9ttWzHMDeb59uFH2Nh_zXP","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytr_UgwVKSYPSj0LxQKm3O54AaABAg.9ttWzHMDeb59vywzU4uDhR","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytr_UgwVKSYPSj0LxQKm3O54AaABAg.9ttWzHMDeb59wv3_xibZPh","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytr_UgxIA4vv04TP2oYa8Y14AaABAg.9tsj13KAO1n9tvqri_oGi_","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"approval"},
{"id":"ytr_UgyKSe3m-7-aXilb5Uh4AaABAg.9try_v_OvoA9tuO-nRjdDc","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytr_UgyqMv7KhLlhbiYmR0x4AaABAg.9trCtmW_nl79uSAalJq4y6","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytr_UgwJt83VYhhD4IohWDB4AaABAg.9tp282_KCIM9tpl7fxAvj7","responsibility":"none","reasoning":"deontological","policy":"unclear","emotion":"indifference"},
{"id":"ytr_Ugx3jnNuRDVU1_7RXpx4AaABAg.9toC_hB1Qhj9uTXpKCMgnz","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"}
]
```
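A raw response like the one above is a JSON array of one coding record per comment, each carrying an `id` plus the four dimensions from the table (responsibility, reasoning, policy, emotion). The sketch below shows one way such a batch might be parsed and sanity-checked before use. The allowed value sets are inferred only from the examples on this page and are almost certainly incomplete; treat them, and the `ytr_` ID-prefix check, as assumptions.

```python
import json

# Allowed values per dimension, inferred from the codings shown on this page.
# This is an assumption: the real codebook likely has more values.
ALLOWED = {
    "responsibility": {"none", "company", "unclear"},
    "reasoning": {"consequentialist", "deontological", "mixed", "unclear"},
    "policy": {"none", "regulate", "unclear"},
    "emotion": {"fear", "indifference", "approval", "outrage", "mixed"},
}

def parse_coded_batch(raw: str) -> list[dict]:
    """Parse a raw LLM response and keep only well-formed coding records."""
    records = json.loads(raw)
    valid = []
    for rec in records:
        # Every record should reference a comment by ID (ytr_ prefix assumed).
        if not rec.get("id", "").startswith("ytr_"):
            continue
        # Drop records with a missing or out-of-vocabulary value.
        if all(rec.get(dim) in vals for dim, vals in ALLOWED.items()):
            valid.append(rec)
    return valid

raw = ('[{"id":"ytr_abc","responsibility":"none","reasoning":"mixed",'
       '"policy":"unclear","emotion":"fear"}]')
print(len(parse_coded_batch(raw)))  # → 1
```

Validating against a fixed vocabulary catches the most common failure mode of LLM coders: a record that is syntactically valid JSON but uses a label outside the codebook.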