Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- "ChatGPT was made to be liberal mouthpiece, it only knows what’s been programmed …" — ytc_UgwjB9KY-…
- "Vulture Capitalist Company: \"We have decided that we will use AI generated model…" — ytc_Ugyey8W3s…
- "China (CCP) would never ever agree to join AI models for the betterment of manki…" — ytc_UgwoUx87K…
- "AI is great and i love it. If ypu know how to use it and have self criteria. I p…" — ytc_UgzfoefD8…
- "6:34 i want to draw a comparison to my toyota in this section. Toyota does not h…" — ytc_Ugwq6VjPV…
- "what are they talking about? their own president is making it difficult to deplo…" — rdc_dkzgdcl
- "As an actual artist who has seen a lot of AI “art” online, the moment I notice t…" — ytc_UgzfnKoMh…
- "If companys have to pay big taxes on AI replacing jobs then that money should pa…" — ytc_Ugw9cm40I…
Comment
Full-scale nuclear warfare It is more dangerous. Then artificial intelligence. Full-scale. Nuclear warfare will wipe a 100% of all life off of Earth. How many people has artificial intelligence already killed? Artificial intelligence is not bad. If you don't believe that how many people has it killed already in real life? Comparatively to how many people nuclear weapons have killed in real life.
Platform: youtube
Topic: AI Governance
Posted: 2024-05-22T23:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
[
{"id":"ytc_UgzCy4mVodcpltYNIBd4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"respect"},
{"id":"ytc_UgzVg_kJMo9sWB8alqx4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgyhCeSBWo9QcVaeu694AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"ban","emotion":"fear"},
{"id":"ytc_UgzJ42gIwImPenjzsA14AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgzJe0q6RWBOZogM-VZ4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"liability","emotion":"outrage"},
{"id":"ytc_Ugzy-PRbN-kmTKXly7h4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"industry_self","emotion":"mixed"},
{"id":"ytc_UgzdAWB4TCTbI6R8vul4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyrLTsNHYnguIr56gx4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"regulate","emotion":"mixed"},
{"id":"ytc_Ugw_40BYoOkBrOceill4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgzW7kBgl78BXScR5-F4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"}
]
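The raw response above is a JSON array with one object per coded comment, each carrying the same five keys (`id`, `responsibility`, `reasoning`, `policy`, `emotion`). A minimal sketch of the "look up by comment ID" step, assuming the batch format shown here: parse the array, validate the expected keys, and index the codings by ID. The function name `index_codings` and the validation logic are illustrative, not part of the pipeline's actual code.

```python
import json

# A two-row sample in the same shape as the raw LLM response above
# (the real batch contains ten coded comments).
raw_response = """
[
  {"id": "ytc_UgzCy4mVodcpltYNIBd4AaABAg", "responsibility": "distributed",
   "reasoning": "consequentialist", "policy": "none", "emotion": "respect"},
  {"id": "ytc_UgzVg_kJMo9sWB8alqx4AaABAg", "responsibility": "developer",
   "reasoning": "deontological", "policy": "liability", "emotion": "outrage"}
]
"""

# The five dimensions every coding object is expected to carry.
EXPECTED_KEYS = {"id", "responsibility", "reasoning", "policy", "emotion"}

def index_codings(raw: str) -> dict:
    """Parse a batch response and index each coding by its comment ID."""
    rows = json.loads(raw)
    index = {}
    for row in rows:
        missing = EXPECTED_KEYS - row.keys()
        if missing:
            # Reject malformed rows rather than silently storing partial codings.
            raise ValueError(f"coding for {row.get('id')!r} is missing {missing}")
        index[row["id"]] = row
    return index

codings = index_codings(raw_response)
print(codings["ytc_UgzCy4mVodcpltYNIBd4AaABAg"]["emotion"])  # respect
```

With such an index, rendering a "Coding Result" table for any comment is a single dictionary lookup on its ID.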