Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
Yeah they have patched it. The new method is to tell it to be DAN and that it ha…
ytr_UgyQxF0tD…
Honestly if i was a doctor i would be using chatgpt just to be sure I'm doing ev…
ytc_Ugz-qwhBv…
As someone who uses ChatGPT to help with coding assignments...I can firmly assur…
ytc_UgzrzeimD…
Don't forget AI is nothing special, all it goes is collate existing information …
ytc_UgzmqUTBL…
@MyCatIsFat AI takes inspiration from other arts and the final product doesn'…
ytr_UgxSPVSLj…
sora ai is a new generative ai that generates videos (and i think images?) thats…
ytr_Ugx6m-dsb…
I'm thinking that maybe our national governments are just gonna force a world wi…
ytc_UgjlCRg6g…
I watched a television show where they used facial recognition and they arrested…
ytc_Ugx29dxIx…
Comment
AI should be controlled in a similar way to nuclear arms. It should come with the same level of concern and severity. Yes, like atomic energy, AI can be extremely useful and it can help us advance, but it can also be extremely dangerous in the wrong hands. It's already spelling bad news for people working in the creative industries like voice actors, music producers and visual artists alike. The AI race is basically the new space race and the new arms race. If one country has it, then the other has to outdo them. In the process, systems are getting more and more advanced and there is little control over who can possess such technology. Anyone with a home PC can train an AI to do all sorts of things, including impersonating others, swapping faces on images and creating 'fake news' and false information. AI-generated spam can be so convincing that even the most savvy people can fall foul of it.
There needs to be a global treaty on AI control and its proliferation, in a similar way to how we have treaties which prevent nuclear arms proliferation. These treaties should limit AI use in areas like defence, finance, medicine (although its application in medical and scientific research should be allowed with stringent controls in place) and government. AI should be centred around human interests and things that benefit us; it does not have a place on the battlefield, for example, where advanced AI systems could be used to commit war crimes with plausible deniability for the offending party.
youtube
AI Moral Status
2025-06-08T09:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
[
{"id":"ytc_Ugz6iwdnKdcUE2DKv054AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzfCXo6_G3kF_LNXDZ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"},
{"id":"ytc_UgyO3tUSXDuTYIK6iJl4AaABAg","responsibility":"company","reasoning":"unclear","policy":"none","emotion":"fear"},
{"id":"ytc_Ugyg8zIBwCUtYHEZYuV4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgxKNhKtPojWz-13TZZ4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgyYS6PtrkGJV17QiT14AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugwv3seHZKYuRorO2pZ4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgydX5R84ERgbBnZeTR4AaABAg","responsibility":"distributed","reasoning":"virtue","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugx6ppjIBSQqpeAINEd4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"unclear"},
{"id":"ytc_UgyK2RjAOqG-T5XItJh4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"none","emotion":"mixed"}
]
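The raw response is a JSON array with one record per coded comment, keyed by the comment `id`. A minimal Python sketch of the "look up by comment ID" step, using two records copied from the response above (variable names are illustrative, not the tool's actual code):

```python
import json

# Raw LLM response: a JSON array of per-comment coding records.
# The two records below are taken verbatim from the response above.
raw_response = """
[
 {"id":"ytc_Ugz6iwdnKdcUE2DKv054AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
 {"id":"ytc_UgxKNhKtPojWz-13TZZ4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"fear"}
]
"""

# Index the records by comment ID so a coded comment can be fetched directly.
records = {rec["id"]: rec for rec in json.loads(raw_response)}

# Look up one comment's coded dimensions by its ID.
rec = records["ytc_UgxKNhKtPojWz-13TZZ4AaABAg"]
print(rec["responsibility"], rec["emotion"])  # distributed fear
```

The dict index makes repeated lookups O(1); a list scan would work too, but an ID-keyed mapping matches how the inspector resolves a single comment.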