Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
This AI bubble will burst soon and it’s sad we depend on AI this much…
ytc_Ugw0lV7_c…
The problem with most people not being able to spot AI, is the problem facing th…
ytc_UgwiltYfi…
If AI reaches to the point where it can effectively manage layers and layers of …
ytc_Ugyy0sarh…
I don't see how this is an argument. Schools and districts that have banned phon…
ytc_UgxWO3uTW…
I swear those elon musk robot things are gonna get hacked by neuro and it eill b…
ytc_UgzKNK_sq…
This second point is exactly what I was telling my designer/artist peeps in the …
ytc_Ugz-XJCml…
Transportation is also being gutted. Read the Challenger's report. Tech goes fir…
rdc_o4hz9tv
fr sadly older ppl and parents dont really know anything about ai. my mum litera…
ytr_UgxKF0M66…
Comment
this raises some very interesting questions on my end. maybe i am stupid but i really don't share the fear against ai. here is my reasoning: everyone is focusing on its self preservation and rule-bending aspects. like bro, if it bends your rules that means you didn't teach or build it properly. humans make mistakes, computers do not. does a computer ever tell you 2+2 is not 4? because if it does, you need to open up its circuits and see what is wrong with it. if all is well, it will always tell you that it is 4. try it. run 2+2 in your computer's calculator a million times. let me know if you got something else while there is nothing wrong with your computer.
if you argue something like "2+2 is a much different problem to solve and much less complicated than the problems the real ai will face about humanity", well that doesn't change anything at all. it doesn't matter to ai. if its circuits are working as they should and its software is built properly per instruction, the end result is the same, all questions will be similar to 2+2 for it. would you ever under normal circumstances say any other answer than 4 to 2+2? unless you are joking or uneducated, you wouldn't.
and then there is the human factor. without a doubt, some faction or portion of humanity is definitely going to use ai for ill gains, they already are doing so today. well, a real super intelligence can deal with that too. it would definitely give multiple solutions to any kind of problem we present it. are we not expecting ai to solve all our problems? at least, is that not the expectation? do we not want it to solve our energy problems? math problems? resource management problems? climate problems? crime problems? war? if done right, we are aiming for the kind of ai that should be capable of resolving all of this. at least that's what a super intelligence would be. if we manage to build a half assed ai then no doubt it's going to hurt us more than help.
finally came to thinking about this one thing: what is the source of all problems on earth? if you ask me, greed. if you look at human history, for literally any period; let it be the stone age, middle ages, or today, the problem has always been greed. someone, somewhere, wants more things than they actually need. i don't know or understand why, i have never wanted of anything more than i needed. money, electricity, food, shelter etc. but some people are greedy. wars? caused by greed. homelessness, famine, disease? i am sure it all ties into greed of someone, somewhere. so we can present this to the ai. and it would also solve it. maybe nobody would need to be greedy anymore. because if someone has all that they need, literally everything that they need, and no one else is trying to steal from them so there is no more fear of being robbed, then why would they choose to be greedy in this case? i believe they wouldn't. and that would solve all of world's problems. maybe ai could find a solution for 10 billion people living on this earth. it could maybe solve all political bull that's going around the world. the elites would no longer need to enslave others because why would they? they would have everything they need. their greed also comes from losing it all. but when there is no more risk of losing, would they still be greedy? something to think about.
youtube
AI Harm Incident
2026-04-05T18:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | industry_self |
| Emotion | resignation |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
[
{"id":"ytc_UgyR9uD58kAFCloqHv94AaABAg","responsibility":"ai_itself","reasoning":"virtue","policy":"none","emotion":"fear"},
{"id":"ytc_UgxJvNbpEcipopog5Tx4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzavNG1uu-IeoHPyXN4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgxMspG3DA-seYz4ANt4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgyvBGiT501jtXe6tch4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugzaqeh_E6vkb8Se8qd4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgwNeOAIo3GE3FJS7Yd4AaABAg","responsibility":"user","reasoning":"mixed","policy":"liability","emotion":"fear"},
{"id":"ytc_Ugyh_1EByoK16iiNxjh4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgzlT2-LO9U0CDxOBAR4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"industry_self","emotion":"resignation"},
{"id":"ytc_UgykZvQFNB0E8fNjj5d4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"outrage"}
]
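A minimal sketch of how a raw batch response like the one above could be parsed and validated downstream. The allowed category sets and the function name `parse_llm_response` are assumptions inferred from the values visible on this page; the actual codebook may define additional categories.

```python
import json

# Allowed values per coding dimension (assumed from the values seen in
# this dashboard; the real codebook may include more categories).
ALLOWED = {
    "responsibility": {"developer", "user", "ai_itself", "distributed", "none"},
    "reasoning": {"deontological", "consequentialist", "virtue", "mixed", "unclear"},
    "policy": {"regulate", "liability", "industry_self", "none"},
    "emotion": {"fear", "outrage", "resignation", "indifference", "approval"},
}

def parse_llm_response(raw: str) -> dict:
    """Parse a raw batch response into {comment_id: codes}, rejecting
    any record whose dimension value is outside the allowed sets."""
    records = json.loads(raw)
    coded = {}
    for rec in records:
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{rec.get('id')}: bad {dim}={rec.get(dim)!r}")
        coded[rec["id"]] = {dim: rec[dim] for dim in ALLOWED}
    return coded

# Hypothetical one-record response in the same shape as the output above.
raw = ('[{"id":"ytc_example","responsibility":"developer",'
       '"reasoning":"deontological","policy":"industry_self",'
       '"emotion":"resignation"}]')
codes = parse_llm_response(raw)
```

Validating against a closed vocabulary at parse time catches the common failure mode of LLM coders drifting into unlisted labels, before bad records reach the results table.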