Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "its ridiculous to say it will wipe us out. it would be better to use humans as l…" (ytc_Ugx-sTLJg…)
- "If i was the same person using the AI, i wouldnt even think about it. People lik…" (ytc_Ugwn8Y2Af…)
- "We don't need actual AGI to destroy most of our jobs. And the people who are mak…" (ytc_Ugwbn7Tls…)
- "It's funny how this video acknowledges that humans can guide the AI and use it t…" (ytc_UgwaSmzIt…)
- "*Long comment alert! 🚨* For anyone arguing that artists being inspired by other…" (ytc_UgzEZ_MHO…)
- "I totally agree with Sophia! It’s key to have that human touch in AI processes. …" (ytc_Ugw78IczR…)
- "@minyaw1234 Many countries like the EU and the US are looking into regulating th…" (ytr_UgyGapgzM…)
- "Ted Kaczynski's opening paragraph in his manifesto about the Industrial Revoluti…" (rdc_oi271gp)
Comment

> This is a risk of AI but so insignificant compared to larger concerns. Take AI development or control in the hands of a psychopath. How about creation of bioweapons, prompts without defined safety parameters ("how can I break into this website", "generate me ransomeware code to send to businesses", "provide me the blueprints for a 3D printable gun that can be easily conceiled", Generation of nuclear weapons, propoganda, misinformation, spoof videos of political figures and creative things like voice replication of relatives calling for financial help over the phone.

Platform: youtube
Topic: AI Responsibility
Timestamp: 2024-09-08T05:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
  {"id":"ytc_UgwLQBSnci8lGkl5Sx54AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgyjJq-NCqRoB6RUJ6x4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgyRI9i9ffPlWbxz9Ox4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgzqnCrGG5l_BdVRccN4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"industry_self","emotion":"fear"},
  {"id":"ytc_Ugweg0TMtQ2SVyC2Dg54AaABAg","responsibility":"company","reasoning":"virtue","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgzloawuGy8yAUD0_Ix4AaABAg","responsibility":"society","reasoning":"contractualist","policy":"regulate","emotion":"mixed"},
  {"id":"ytc_UgwkEbqEtyaxCedaSv54AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwfyyfVgO6m1MpPeJl4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_Ugz4AGxCioqKbOTEl0h4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"unclear"},
  {"id":"ytc_Ugya069rBsHQQORGL6B4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"}
]
```
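The raw response is a flat JSON array, one object per comment, keyed by comment ID across the four coding dimensions. A minimal sketch of how such a response could be parsed and indexed to support a by-ID lookup (hypothetical code, assuming this JSON shape; not the tool's actual implementation):

```python
import json

# The four coding dimensions emitted per comment in the raw response.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

# Two entries copied from the raw response above, for illustration.
raw_response = """[
  {"id":"ytc_UgwLQBSnci8lGkl5Sx54AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgyjJq-NCqRoB6RUJ6x4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]"""

def index_codings(raw: str) -> dict:
    """Parse a raw LLM response and map comment ID -> coding dict."""
    indexed = {}
    for entry in json.loads(raw):
        # Skip malformed entries rather than crash the viewer.
        if "id" not in entry or not all(d in entry for d in DIMENSIONS):
            continue
        indexed[entry["id"]] = {d: entry[d] for d in DIMENSIONS}
    return indexed

by_id = index_codings(raw_response)
print(by_id["ytc_UgyjJq-NCqRoB6RUJ6x4AaABAg"]["emotion"])  # fear
```

An index like this is what a "Look up by comment ID" view needs: one `json.loads` per response, then constant-time lookups per comment.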