Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
5:10 NO WE ARE ANGRY BECAUSE YOUR STEALING OUR ART AND MAKING A CHEAP PROFIT OUT…
ytc_UgybhMN-f…
100% agree. We've failed to predict pretty much anything that comes to pass. Or …
ytr_Ugwt9Ps-W…
For nearly 25 years I had no idea what I wanted to do with my life, no idea what…
ytc_UgxlLITCQ…
However, already in movies Rashmika has appeared in lesser sized kerchief sized …
ytc_UgzJ_yM55…
AI is the new frontier, anything is possible. But we won't know until we try it.…
ytc_UgxeMsRsd…
to me, it shouldn't even be called "Ai Art". Ai gen image would be more appropr…
ytc_UgywJqX4J…
your still on this? Let it go... I know they are annoying but they will prove th…
ytc_UgxLqljtc…
I'm like the only creative digging this A.I art alot of people are just straight…
ytc_UgxvgJE1F…
Comment
No, it's unlikely that any AI, including a robot AI, would "destroy the world" without specific harmful programming or misuse. AI systems are designed to perform specific tasks and operate under strict guidelines set by humans. While the concept of AI causing harm is a popular topic in science fiction, real-world AI systems are generally created with ethical and safety measures in mind. The focus is on developing AI responsibly and ensuring it is used for beneficial purposes.
youtube
AI Moral Status
2024-11-12T16:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
[
{"id":"ytc_Ugw4q-JNlGT_y_C5O-B4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgxYsoH2PWmA0vmq-4Z4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_Ugz_2X5Boy2iMeHMqjR4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"unclear"},
{"id":"ytc_UgwghBCdKokLSu-PyX94AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgyapbBfODkUzW6xexd4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_Ugx5ZrW__Gn4AyZ3G9B4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgyxMv8zQDeLHU6hx5h4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
{"id":"ytc_Ugwfr9JepTpY1E0O2lp4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_UgwMLTpkNMLsqs_dRJF4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgzZK3SFg_D2KsQ64kR4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"outrage"}
]
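The raw response above is a JSON array with one object per coded comment, each carrying the same four dimensions shown in the coding-result table. A minimal sketch of how such a batch might be parsed into a per-ID lookup — the dimension names are taken from the sample JSON, while the function and variable names are hypothetical:

```python
import json

# Illustrative raw response in the same shape as the batch above
# (one full example ID copied from the sample; not a real lookup key).
raw_response = """
[
  {"id": "ytc_Ugw4q-JNlGT_y_C5O-B4AaABAg",
   "responsibility": "developer",
   "reasoning": "consequentialist",
   "policy": "none",
   "emotion": "approval"}
]
"""

# Dimensions as they appear in the coding-result table and the JSON objects.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def parse_codings(raw):
    """Index each coded comment by its ID, keeping only the known dimensions.

    Missing or extra fields are tolerated: absent dimensions fall back to
    "unclear", matching the value the coder itself emits for ambiguous cases.
    """
    records = json.loads(raw)
    return {
        rec["id"]: {dim: rec.get(dim, "unclear") for dim in DIMENSIONS}
        for rec in records
    }

codings = parse_codings(raw_response)
print(codings["ytc_Ugw4q-JNlGT_y_C5O-B4AaABAg"]["emotion"])  # approval
```

Keying by comment ID makes the "Look up by comment ID" view above a single dictionary access, and defaulting absent dimensions to `"unclear"` keeps one malformed object from breaking the whole batch.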