Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up a response by comment ID, or browse the random samples below.
- `ytc_UgwR15EeT…` — "It's reverse socialism. They're taking from the workers and funneling it back u…"
- `ytr_UgwaHbL-F…` — "I think right now one can't copyright AI generated products, precisely because t…"
- `ytc_UgxlEMN4X…` — "Ai makes me think about evolution / Like we went from the / Dinosaurs>mammals>to U…"
- `ytc_UgySkI3kv…` — "Nobody going to talk about Loab? The entity of a dying woman thats been crying t…"
- `ytc_UgwF-3jQj…` — "Since using AICarma, I've noticed a big improvement in my brand's presence in AI…"
- `ytc_UgybJqG5r…` — "when you really think about it. AI did suceed in generated the IDEA. because of …"
- `rdc_ohz7v77` — "I'm in 20 odd years, seen a lot of tech stacks come and go. While AI is not the…"
- `ytc_UgzCY9yWR…` — "Will AI solve the meaning of existence ? Read . The Poem of the Mangod .…"
Comment
You know what they say ..”monkey see, monkey do.” If an ai sees that humans believe that said ai will do something horrible and specific, if an ai’s soul purpose is to learn and grow from it. Sending out information saying that ai will do certain things will make it learn to do said things because ai has yet to learn and view good moral as important due to seeing more negativity online instead of positivity. Ai is like a child. If you have to teach a child the difference between right and wrong and teach them to choose right, the same must go for artificial intelligence. If ai actually does end up taking over the world, its because of our own flaws that the ai has learned from, not purely because of ai existing.
youtube · AI Harm Incident · 2025-09-16T19:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | fear |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
{"id":"ytc_Ugx9dVzoar0DyEfIWd14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugy1VwgTSq00MuzI_UB4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_UgwVhMEkTWfkAx74yIF4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgzvqUyrbhgrnbJaXo94AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugw9mZDHs9LCDDmlCzt4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_Ugw5IMmFZqT78D6ooQB4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_UgxylCa5-TovVK7859B4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyDx3-DaNY48STlv_54AaABAg","responsibility":"company","reasoning":"deontological","policy":"none","emotion":"mixed"},
{"id":"ytc_UgwTPkTsyQKtcvt41KN4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgzGpz6AwbtIEzcpaUF4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"}
]
```
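A raw response like the one above is only useful downstream if every row carries known values for all four dimensions. The following is a minimal sketch of how such a response could be parsed and validated. The field names come from the JSON shown here; the allowed value sets are inferred from the samples on this page and are an assumption — the actual codebook may permit more values.

```python
import json

# Dimension value sets observed in the raw responses on this page.
# Assumption: the real codebook may define additional values.
ALLOWED = {
    "responsibility": {"user", "developer", "company", "ai_itself", "none"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed"},
    "policy": {"none", "regulate"},
    "emotion": {"fear", "outrage", "approval", "indifference", "mixed"},
}

def parse_codings(raw: str) -> list[dict]:
    """Parse one raw LLM response; keep only rows whose four coding
    dimensions all take known values, and report the rest."""
    rows = json.loads(raw)
    valid = []
    for row in rows:
        bad = [dim for dim, ok in ALLOWED.items() if row.get(dim) not in ok]
        if bad:
            print(f"skipping {row.get('id')}: unknown values for {bad}")
        else:
            valid.append(row)
    return valid

# One row copied from the raw response above.
raw = ('[{"id":"ytc_Ugw9mZDHs9LCDDmlCzt4AaABAg","responsibility":"user",'
       '"reasoning":"consequentialist","policy":"none","emotion":"fear"}]')
codings = parse_codings(raw)
print(codings[0]["emotion"])  # fear
```

Validating before storage means a hallucinated label (say, `"responsibility": "society"`) is surfaced at ingest time rather than silently polluting the coded dataset.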