Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
if you make AI it should not think for itself and not being able to take decisio…
ytc_UgxWXmn27…
I've taught at the college level for over 25 years. As an experiment, I gave Cl…
ytc_UgwI4T0ra…
The same in any other field of study. Ask ChatGPT to write an essay on anything …
ytc_UgxI4hpFN…
Wow good chat 🎉
WHAT makes us different and special as humans is our 💛💫 this is …
ytc_UgyiGjJ11…
My husband and I both make AI for fun and what I haven't seen anyone talk about …
ytc_UgzqYjCL-…
One good thing about these abortion laws is that now since ai is sentient and al…
ytc_UgxMARDTe…
A store I frequent has a robot that checks the shelves and counts product. This…
ytc_UgzWso8zw…
It's being quite transparent about using colloquial phrases to facilitate conver…
ytc_UgwkNHYCm…
Comment
Okay what he's saying here is purposely worded like this so he "scares" more people into resereach , but let me make it simple for a lot of you out there because most people don't really understand what AI really is.
AI is INCAPABLE of taking control , it's just a human program and what matters is HOW AND BY WHOM it's programmed.
What you guys are scared of is a concept called AGI ( artificial general intelligence) , now this is the type of hypothetical program that could take over control by itself but we probably won't ever create this type of program because it requires sentience or rather consciousness .
So we should not be afraid of AI as AI is merely a tool , it can be used for good or for bad .
youtube
AI Responsibility
2025-07-24T04:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | unclear |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
[{"id":"ytc_UgyImi6Dzmb4Q9Funb54AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgymcRcrztIeIbopWVZ4AaABAg","responsibility":"company","reasoning":"virtue","policy":"liability","emotion":"outrage"},
{"id":"ytc_Ugz5td3W_GG0gCpq5jp4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugy_8OVng750eYRhyVF4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"industry_self","emotion":"indifference"},
{"id":"ytc_UgyKWTk4N-WX_b6DBmB4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
{"id":"ytc_UgwQAwfEFNGrC6O8NDp4AaABAg","responsibility":"government","reasoning":"deontological","policy":"ban","emotion":"fear"},
{"id":"ytc_Ugxp96y0Zmkh3udsduB4AaABAg","responsibility":"government","reasoning":"virtue","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgwRlQWVj9IiOt_lrKV4AaABAg","responsibility":"company","reasoning":"contractualist","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugys9iRln13TyHzxj3l4AaABAg","responsibility":"government","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgwJyqcf39wLd_Qsbnd4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}]
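The raw response above is a JSON array keyed by comment ID, and the all-"unclear" Coding Result table is what you see when a comment's ID is absent from that array. A minimal sketch of that lookup logic, assuming only the structure shown above (the function name `lookup_codes` and the abbreviated two-row sample are illustrative, not part of the actual pipeline):

```python
import json

# Hypothetical two-row excerpt in the same shape as the raw response above.
raw_response = """[
{"id": "ytc_UgyImi6Dzmb4Q9Funb54AaABAg", "responsibility": "government",
 "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
{"id": "ytc_UgwJyqcf39wLd_Qsbnd4AaABAg", "responsibility": "ai_itself",
 "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"}
]"""

DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def lookup_codes(raw: str, comment_id: str) -> dict:
    """Return the coded dimensions for comment_id, falling back to
    "unclear" on every dimension when the model omitted that ID."""
    by_id = {row["id"]: row for row in json.loads(raw)}
    row = by_id.get(comment_id, {})
    return {dim: row.get(dim, "unclear") for dim in DIMENSIONS}

print(lookup_codes(raw_response, "ytc_UgyImi6Dzmb4Q9Funb54AaABAg"))
print(lookup_codes(raw_response, "ytc_not_in_response"))
```

The second call illustrates the fallback path: an ID the model never returned yields "unclear" across all four dimensions, matching the Coding Result table for the comment shown above.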