Raw LLM Responses
Inspect the exact model output for any coded comment, or look up a specific comment by its ID.
Random samples:

- "Here's another type of AI future: The AI becomes smarter than us and is on the v…" — ytc_UgwGFk65s…
- "I love how ChatGPT has ignited this fear of what things ChatGPT may be at risk o…" — rdc_j8buek1
- "I have a friend that used to work for Samsung there. He ended up moving to the U…" — rdc_dv0nuqw
- "Companies jumped the gun too soon out of greed, but the day will soon come when …" — ytc_UgyTxSqs_…
- "Ai will replace humans and humans will be very few according to Globalist paperw…" — ytc_UgzOWpSL7…
- "Current Turkish president, as far as i have heard used deepfakes about his polit…" — ytc_UgzBgC-KO…
- "@augustnkk2788Because AI lacks sentience, so the concept of “creativity” is inco…" — ytr_UgzAcVOie…
- "people who thgt the robot was gonna do elizabeths death on the person" — ytc_UgxbpmOCI…
Comment

> I am most concerned with possibility and likely eventuality of some person, company, terrorist, country, or whatever INTENTIONALLY creating an AI and giving it open access to world. Or worse, they make an INTENTIONALLY MALICIOUS AI and unleash it on everyone. What happens to cyber security when it becomes AI vs AI? What if an AI creates a super digital virus, spreads it everywhere it can and completely shuts down digital networking as a whole? Or the age of AI generated Mis/disinformation just absolutely ruins any human faith in communication outside of in-person communication. What happens when nobody can trust any information? How will history be viewed in the future when we can no longer decide fact from fiction anymore? Did that happen? Did it actually occur the way the evidence suggest? We will have written, audio, video, that nobody can be sure is true anymore unless they witnessed it themselves.

Platform: youtube | Video: AI Moral Status | Posted: 2023-12-18T18:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_UgwLIXAE66kuy75crex4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ugwc9eFziCJ6DGieUkt4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"resignation"},
  {"id":"ytc_UgxQLFP88T0RohpBtOF4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgyhuEkTxbr1LB2qYrN4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"},
  {"id":"ytc_UgxEkPNIeuxDd7Cpsvt4AaABAg","responsibility":"unclear","reasoning":"deontological","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ugxqrts_GpFbBhHyEbl4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgyrkevO1uAIT9i4QB94AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgzTamRd1BcGXnklRbB4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"approval"},
  {"id":"ytc_UgxwqheVuBBVy4Tlf1J4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgylUJQC4tzflzV7bdZ4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"resignation"}
]
```
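The look-up-by-ID flow above can be sketched in a few lines: parse the raw LLM response, check that every row carries all four coding dimensions, and index the rows by comment ID. This is a minimal illustration, not the tool's actual implementation; the field names come from the JSON above, and `index_by_id` is a hypothetical helper. Two rows from the response are excerpted for brevity.

```python
import json

# Two rows excerpted from the raw LLM response shown above.
raw = """
[
 {"id":"ytc_Ugxqrts_GpFbBhHyEbl4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
 {"id":"ytc_UgxQLFP88T0RohpBtOF4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"outrage"}
]
"""

# The four coding dimensions, as seen in the response schema.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def index_by_id(raw_response: str) -> dict:
    """Parse a raw coding response and index rows by comment ID."""
    rows = json.loads(raw_response)
    coded = {}
    for row in rows:
        # Fail loudly if the model omitted a dimension.
        missing = [d for d in DIMENSIONS if d not in row]
        if missing:
            raise ValueError(f"{row.get('id')}: missing {missing}")
        coded[row["id"]] = {d: row[d] for d in DIMENSIONS}
    return coded

coded = index_by_id(raw)
print(coded["ytc_Ugxqrts_GpFbBhHyEbl4AaABAg"]["emotion"])  # fear
```

Indexing by ID makes the "Look up by comment ID" operation a single dictionary access instead of a linear scan over the response.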