Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- Think about it...if it learns by how people respond, then it knows the worry tha… (`ytc_UgxOCtT-9…`)
- “I aM aN aI aRtIsT” Translation: I am a talentless hack who can’t be bothered t… (`ytc_Ugxa3xDph…`)
- They want us to ACCEPT the ai look, so they are processing real videos to look l… (`ytc_UgwmLSa3F…`)
- If we had a one world government we could pump the brakes but since we dont, the… (`ytc_UgxlKbdnY…`)
- Love him or hate him, Yann LeCun is a realist who knows the intricacies of the r… (`ytc_UgyUGgUlM…`)
- We must strictly enforce Asimov's 3 rules and control AI sentiency with military… (`ytc_UgwMRuhY7…`)
- Misuse of AI should be illegal same as counterfeit money is not legal tender . W… (`ytc_UgxATOQVf…`)
- “Slow down” doesn’t signal the driver behind like brake lights do. Tap the brak… (`ytc_UgwEh2u7X…`)
Comment
You told AI to be this persona Dan that has no morals and will do anything now to achieve it's goals, and then you are scared of what it says, when basically you asked AI to answer in such a terrible and terrifying ways. I don't see this as an AI problem but as human problem, which means it's YOUR dark side HUMANS dark side (which is the real danger of AI), not AI.... As for information about people, it's other people that collects that data that needs to be stopped and then sells it to others. So even if we stop developing these AI's doesn't mean bad people will stop creating their dangerous AI's, and making good people fall behind by developing their good AI's would be a risk itself and there is no such technology to stop anyone to develop a software right now and in my opinion this could be done, again, only with AI, since internet is free access to everyone, and there is ~8b people in the world.
youtube
AI Moral Status
2023-04-12T12:0…
♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | virtue |
| Policy | none |
| Emotion | resignation |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
[
{"id":"ytc_UgzYxb8Kwg_OtFxIfPd4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgzGTkmrd2z2Okl_XBN4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_Ugz_-TrPU67teRWqwTd4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"resignation"},
{"id":"ytc_UgyqvgkPqRj0LAs_lbN4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgzLP0sNDp1opoEMoWd4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugwx41t-QAJF2CcDtTF4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgwjpAmU9BcXt5W-E-h4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgwBccQem6pN1qnQ3lp4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgxA1s-fmPsQgT_pnwp4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugw2xxtuhuIVMKO4Kj54AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"}
]
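The lookup-by-ID view above can be reproduced from a raw response like this with a few lines of Python. The field names (`id`, `responsibility`, `reasoning`, `policy`, `emotion`) follow the JSON shown; the set of allowed code values is inferred from the responses on this page and is an assumption about the real codebook. A minimal sketch:

```python
import json

# Allowed values per dimension, inferred from the responses shown here;
# the actual codebook may define more (assumption).
DIMENSIONS = {
    "responsibility": {"user", "developer", "ai_itself", "distributed", "unclear"},
    "reasoning": {"deontological", "consequentialist", "virtue", "mixed", "unclear"},
    "policy": {"ban", "regulate", "liability", "none", "unclear"},
    "emotion": {"fear", "outrage", "resignation", "indifference"},
}

def parse_response(raw: str) -> dict:
    """Parse a raw LLM response (a JSON array of codings) into an
    id -> coding dict, skipping entries with out-of-codebook values."""
    by_id = {}
    for entry in json.loads(raw):
        cid = entry.get("id", "")
        if not cid.startswith("ytc_"):
            continue  # not a YouTube comment ID
        if all(entry.get(dim) in allowed for dim, allowed in DIMENSIONS.items()):
            by_id[cid] = {dim: entry[dim] for dim in DIMENSIONS}
    return by_id

# One entry copied from the raw response above, used as a tiny example.
raw = ('[{"id":"ytc_Ugz_-TrPU67teRWqwTd4AaABAg","responsibility":"user",'
       '"reasoning":"virtue","policy":"none","emotion":"resignation"}]')
codings = parse_response(raw)
print(codings["ytc_Ugz_-TrPU67teRWqwTd4AaABAg"]["emotion"])  # resignation
```

Rejecting entries whose values fall outside the known codebook is a deliberate choice here: LLM coders occasionally emit off-schema labels, and silently keeping them would corrupt downstream tallies.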