Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples

- "What we're going to see is a new kind of terrorism with these people attacking t…" (ytc_Ugy93yv5Q…)
- "1:53 ummm.yes?.... you dont understand how fast AI is growing. When you compare…" (ytc_UgyQiAkkX…)
- "AI should not be for everyone. Any invention bring good as well as bad. But surv…" (ytc_Ugw8s6O2o…)
- "Can you do a Sam Altman's ancestral roots query on any AI tool? Altman has an in…" (ytc_UgwpxuyHV…)
- "AI sucks for now, I’ve literally told the AI solution for a problem which i aske…" (ytc_UgyQwG05W…)
- "Actually, more useful for an initial bio-war attack. The face recognition stuff …" (ytr_UgyV6Lvuh…)
- "If AI gives us a break from woke propaganda, I'm on it's side. Let it cook…" (ytc_Ugwt6elkI…)
- "Vertical slice architecture where individual modules are self contained and have…" (rdc_oi0p3zg)
Comment
If AI is us, why would it reflect only what is good about us, and not the bad? Why would it reflect love and not hatred, compassion and not greed? I'm always amazing by how all the scientists and expert talking about AI, always downplay the potential danger associated with AI in the most naively optistimistic way, as if most technologies invented by humans historically have not been used both for the good and the bad of humanity. There is a good chance that AI will be the end of human beings as a species, I think it doesn't help to downplay that risk. If we are already playing with fire, shouldn't we fully understand how much it can burn?
Source: youtube · 2024-06-06T13:3… · ♥ 6
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_UgySFcSLVHtV3duGVAN4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgwQE4GjUuElcj28o3x4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugx3TIFeAEmpFQctROh4AaABAg","responsibility":"user","reasoning":"virtue","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgwzNG7s9H4IjDi3dr14AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgxYR4fJQt0-x_r04x14AaABAg","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"fear"},
{"id":"ytc_UgyIxd5OIP8w_23kkG54AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_UgxAFTtvg_9LfZCg4jZ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"resignation"},
{"id":"ytc_UgynMeD9H6jwAihxQIN4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"mixed"},
{"id":"ytc_Ugy2bn_lZqMADnxb-r54AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugy-IeM_lz_jQinYvRV4AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"outrage"}
]
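A minimal sketch of how a raw response like the one above could be parsed and sanity-checked before its rows are stored as coding results. The allowed value sets are inferred only from the samples shown on this page, not from the actual codebook, so they are an assumption and likely incomplete:

```python
import json

# Allowed values per coding dimension — inferred from the outputs shown
# above; the real codebook may define additional categories (assumption).
ALLOWED = {
    "responsibility": {"developer", "user", "ai_itself", "government",
                       "distributed", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue",
                  "mixed", "unclear"},
    "policy": {"regulate", "ban", "liability", "unclear"},
    "emotion": {"fear", "mixed", "resignation", "outrage", "unclear"},
}

def validate_coded_rows(raw: str) -> list[dict]:
    """Parse a raw LLM response (a JSON array) and check that every
    row carries an id plus a recognized value for each dimension."""
    rows = json.loads(raw)
    for row in rows:
        if "id" not in row:
            raise ValueError(f"row missing id: {row!r}")
        for dim, allowed in ALLOWED.items():
            value = row.get(dim)
            if value not in allowed:
                raise ValueError(f"{row['id']}: bad {dim!r} value {value!r}")
    return rows

raw = ('[{"id":"ytc_example","responsibility":"developer",'
       '"reasoning":"consequentialist","policy":"regulate","emotion":"fear"}]')
rows = validate_coded_rows(raw)
print(len(rows))  # 1
```

Rejecting unknown values here (rather than silently storing them) makes it easy to spot the model drifting off the codebook.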