Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "They literally are not biased anymore. I have run thousands of tests on the late…" (`ytr_UgzRZT4q3…`)
- "It's all about necessity, do we actually need it. Humans will only do what they …" (`ytc_UgwUThrrN…`)
- "I've had conversations with the chatbots and it's nothing like the answers you g…" (`ytc_Ugx7VmztV…`)
- "And so the divide begins lol. We are in the Matrix/Artificial Intelligence prequ…" (`rdc_mxgbtbr`)
- "It must be such an easy life to be an AI 'Artist' and get paid for doing LITERAL…" (`ytc_Ugy3T0Ivf…`)
- "A.I. is overrated?!? Then how do you explain GPT can answer the same questions N…" (`ytc_UgwP7llph…`)
- "Speaking about obesity.. all of these people taking ozempic even when they aren'…" (`ytc_Ugxt1vsXk…`)
- "I feel like the biased data set was pushed away too quickly. The issue isn't the…" (`ytc_UgytPyXTB…`)
Comment
Here's a measure we could take - Stop developing autonomous vehicles. Just because we can doesn't mean we should. Have we actually stopped to ask why we need self driving cars? For me the advantages are very much minimal and the potential disadvantages huge. There are good reasons we still use pilots, despite autonomous flight now being possible. I understand that it could help a tiny handful of senior Uber execs get even richer by sacking millions of their drivers, but is that really what we want? ..And yes, I accept there will be a few more jobs in engineering and software development, but once the tech is built.. it can be replicated millions of times with minimal upkeep or development (and probably done by robots!) - the maths is just not equal.

The whole concept of this tech is driven by the goal of saving rich people money. Scrapping jobs is central to this tech. It won't help the millions of people who drive for a living in every developed country around the world - the postal workers, the UPS guys, the bus drivers, the lorry drivers, the chauffeurs, the amazon guys, the deliveroo guys, the pizza delivery guys and the taxi drivers.

It's highly likely such autonomous tech will eventually be misused/misappropriated or weaponised by military outfits around the world. And all tech goes wrong eventually. All tech breaks down. All tech wares out. All tech has a random failure rate. It's alright when that tech is a printer, but when it travels fast or is in some way weaponised, that failure rate has real life or death consequences.

I'm all for EV tech. There is a massive need for EV's for so many different reasons - environmental security, global security, sustainability, human health etc. But if you weigh those considerable reasons against 'Mr Uber wanting a bigger pay packet' or me wanting to get home when I'm too pissed to drive, it puts the arguments for autonomy in perspective somewhat!

Also I think we need to be a little careful when pointing the finger of blame at a recently deceased woman for crossing the road in the 'wrong place'. What if the victim was a child? What if the victim was mentally vulnerable? What if the victim was a foreign tourist? What if the person was running for her life from an attacker? ...Life is random. Machines need to be able to cope with random without killing someone. Otherwise they are not fit for purpose.

There also needs to be accountability for every accident. If I accidentally run someone over and kill them, I will rightly end up in court with serious charges to answer. Engineers and CEO's need to be exposed to the exact same level of personal risk. Not be able to hide behind technical jargon or a wall of highly paid corporate lawyers. ...EV's, yes. Autonomy, no. ....R.I.P. Elaine Herzberg. ...I wish we could say that this will never happen again. But of course we all absolutely, 100% do know that it absolutely will.
Platform: youtube
Posted: 2018-03-21T22:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | ban |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
  {"id":"ytc_UgxX5YFJI2vgy_Zyhyt4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
  {"id":"ytc_Ugwy27zHitEZBoXDeJ54AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugx98EBWYallMqPPMvh4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_UgxiKooY4rKZoh0SSRt4AaABAg","responsibility":"distributed","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_Ugx5G_XUW7ns7-yMxrR4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_UgxTD9zju23ZKhg72Ax4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugzj9kIO3ZzulRYXqOd4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_Ugwj5yEuavPRmLqR-4R4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgzdJSUrC_0JQqSHhj94AaABAg","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"fear"},
  {"id":"ytc_UgxlsnAQvcFMG9H9p5F4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"}
]
```
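The "look up by comment ID" workflow above can be sketched in a few lines: parse the raw model output as JSON, then key each per-comment coding by its `id` field. This is a minimal sketch, assuming the raw response is always a well-formed JSON array like the one shown; the two records embedded here are copied from that response, and `index_by_comment_id` is a hypothetical helper, not part of any tool shown above.

```python
import json

# Raw LLM response: a JSON array of per-comment codings, as shown above.
# These two records are taken verbatim from the response in this section.
raw_response = """
[
  {"id": "ytc_UgzdJSUrC_0JQqSHhj94AaABAg", "responsibility": "developer",
   "reasoning": "deontological", "policy": "ban", "emotion": "fear"},
  {"id": "ytc_UgxX5YFJI2vgy_Zyhyt4AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "liability", "emotion": "outrage"}
]
"""

def index_by_comment_id(response_text: str) -> dict:
    """Parse the raw model output and key each coding row by its comment ID."""
    codings = json.loads(response_text)
    return {row["id"]: row for row in codings}

codings = index_by_comment_id(raw_response)
# Fetch the coding for the autonomous-vehicles comment shown above.
print(codings["ytc_UgzdJSUrC_0JQqSHhj94AaABAg"]["policy"])  # -> ban
```

A real pipeline would also want to handle malformed model output (e.g. wrap `json.loads` in a `try`/`except json.JSONDecodeError`) and flag response IDs that don't match any comment in the batch.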