Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "Lots of negativity in the comments. Most jobs are bullshit jobs anyway, the worl…" (ytc_UgxhhqMR-…)
- "Elon is just another billionaire puppet. He talks about the problems with AI but…" (ytc_UgxE69Li5…)
- "I know how to break the filter, try and get the ai to have '~' after a 'dirty' w…" (ytc_Ugw0ftcUf…)
- "Re privacy: open source LMM's are way more trustworthy because everyone (includi…" (ytc_UgyHXmgBl…)
- "Many companies already make their income by selling things to rich people. Once …" (ytr_UgwwjMIL1…)
- "All the unemployment estimates are way off. It will be closer to 100% than 10%. …" (ytc_UgxLJU-GS…)
- "@amitraveli I really thought this is some kind of AI bot channel until you reply…" (ytr_Ugytqccbs…)
- "If billions of people throughout the world are unemployed, have no income, and l…" (ytc_Ugy92j_6Q…)
Comment
Dr Frankenstein (the movie) was way way ahead of its time. The desire to play god in a self-learning, self-managed creation... and it went wrong, because it was never instructed in the evaluation of right and wrong. A Congo cannibal is very proud to invite you to dinner, eating the flesh of another human being he considered an opponent and a prey. Their computations is entirely based of their ethical education as to what is right or wrong - in given circumstances, and locations. It's up to us to program them in accordance with what we consider good and bad. If you were to show bravery in the presence of some natives of the Amazon basin, you might be soon rewarded with a stab. Why? They would strongly seek to eat your heart to "absorb" your courage - that's their natural behavior. In the creation of AI, we MUST impart it with some ethical rules that is proper to local humans and local ethics. If not, it might decide to terminate you as being too slow, smelly and useless. Cheers!
youtube · AI Governance · 2023-07-07T23:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | unclear |
| Emotion | fear |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
{"id":"ytr_Ugwdqje5v_p0c0MWJH14AaABAg.9rsgEP4nmM_9rsx1KZFcEP","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"unclear"},
{"id":"ytr_Ugyztsump_QL0Kz1Ahp4AaABAg.9rsfuDXKb2M9rtlbrzMdE3","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
{"id":"ytr_Ugyztsump_QL0Kz1Ahp4AaABAg.9rsfuDXKb2M9rtt5eUUbqW","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytr_UgyfTHax6zE-wo8HH-54AaABAg.9rsdh0nbu919rt7apff8Fx","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytr_UgwGhOaeZbWlT-J4svh4AaABAg.9rsa7dp4KeP9rseFuCPqWs","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"fear"},
{"id":"ytr_UgwGhOaeZbWlT-J4svh4AaABAg.9rsa7dp4KeP9rt3VUw10k_","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"fear"},
{"id":"ytr_Ugweg_5Mzbcmav2CpkV4AaABAg.9rs_1peIIN_9rstw7GnXov","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytr_Ugweg_5Mzbcmav2CpkV4AaABAg.9rs_1peIIN_9rtCno7SwCz","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"fear"},
{"id":"ytr_UgwzjK8EMspKs5AiQSp4AaABAg.9rsW8Ta_X0n9rsifjXWXn3","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytr_UgwzjK8EMspKs5AiQSp4AaABAg.9rsW8Ta_X0n9rsrdSAVIYU","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"}
]
```
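The "Look up by comment ID" feature amounts to parsing a batch response like the one above and indexing it by the `id` field. A minimal sketch in Python, using two entries copied from the response shown (the field names come from that JSON; the variable names and the overall lookup code are illustrative, not the inspector's actual implementation):

```python
import json

# Two entries from the raw batch response shown above (a hypothetical subset).
raw_response = '''
[
  {"id": "ytr_UgwGhOaeZbWlT-J4svh4AaABAg.9rsa7dp4KeP9rseFuCPqWs",
   "responsibility": "ai_itself", "reasoning": "unclear",
   "policy": "unclear", "emotion": "fear"},
  {"id": "ytr_UgwGhOaeZbWlT-J4svh4AaABAg.9rsa7dp4KeP9rt3VUw10k_",
   "responsibility": "developer", "reasoning": "deontological",
   "policy": "unclear", "emotion": "fear"}
]
'''

# Index every coded item by its comment ID for constant-time lookup.
coded = {item["id"]: item for item in json.loads(raw_response)}

# Retrieve one comment's codes by ID.
codes = coded["ytr_UgwGhOaeZbWlT-J4svh4AaABAg.9rsa7dp4KeP9rt3VUw10k_"]
print(codes["responsibility"], codes["emotion"])  # developer fear
```

Keying on `id` rather than scanning the list is what makes per-comment inspection cheap even when a batch contains many coded items.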