Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
@MisterMixxyThanks? Imagine being so terminally online that you consider basic …
ytr_UgyjSg0eW…
Let’s be honest, AI is coming for taking those white collar jobs: accountants, f…
ytc_Ugy5RjUMK…
Yes, because the whole point of investing trillions of dollars into AI and build…
ytc_Ugxr_eLXq…
The notion that advanced technology will create a utopia for the general public …
ytc_Ugw-AZ0p8…
I will say one of the eyes is a bit weird but it's def not ai…
ytc_Ugz1TeGnU…
I am a behavioural analyst who spent a decade using the largest dataset of human…
ytc_UgxE4qqjL…
Yaasss goo girl... And Iam Afraid that AI will store everything and anything and…
ytc_Ugy64dJzN…
AI Robots combine machine learning, neuronal networks, linquistics, learning fro…
ytr_UgxyVopkb…
Comment
Okay, I have a major and very practical fear, which no one has mentioned. And whilst I haven’t listened to the entire interview yet, nor can I see this subject mentioned in the chapter headings. I’m commenting to make a specific point (a major fear), but I have to be honest, I wish we hadn’t even had the idea of getting computers to think for themselves. My friends at university in the early 80s were doing it as PhDs. And it sounded an interesting idea. (My career was in trend-spotting and branding.)
So here we are now, seemingly with AI with no concept of the important human qualities of morals or conscience. I totally get the medical and scientific positives and zippy problem solving abilities. But I also see the negative potentials of that shared knowledge and understanding base. Sadly, it all just occurs to me as the modern version of humans looking for cheap labour. And we know how well historically that’s usually worked out. This time, men in suits are looking to get rich by coming up with sexy tech to sell to companies and the public to ‘make life easier’. As usual money and myopic greed. But this time, that basic human drive could get us all cooked.
It’ll be bad enough when AI can turn our cars off en masse, so we can’t move, or when it starts firing rockets at nations without our instruction. Yeh, doom, gloom and cynicism, and I don’t have a solution. But bigger than those possibilities, I really do need to draw the AI experts’ attention to an area which needs to be a crucial part of a solution or at least safety valve to stop AI eliminating us at a whim. I’m almost fearful of mentioning this in case AI clocks it and finds it interesting. But if I’ve thought of it, I’m sure AI has worked this out long ago.
I’ve lived in 26 cities in dozens of addresses in numerous countries, and all the people I met had one simple, vital thing in common: we’re basic meat sacks, which totally depend on water to survive. So, to eliminate or at least control humans, AI just needs to control the water supply. Most civilisations, societies and communities depend on vast, stored resources of dammed reservoirs for potable, safe drinking water. And they’re all computerised. So all AI needs to do is turn the taps off. I imagine AI finds water pretty useful for its own cooling needs, so even before we became a burden, irritation or rebellion against it, we’d already be competition for that precious resource anyway.
Even after searching for bottled water (I’ve had to do it after a huge earthquake), and using rivers, wells and rain water capture, humans probably would have trouble lasting much beyond a week. I hope someone good with computers has thought of this.
Source: youtube | Topic: AI Governance | 2025-09-06T13:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | ban |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_UgwEvVROQgICGKrI5AB4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_Ugwl6vQw9-BBL9R2TVJ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgwSSFhP_idXulWKj5R4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugy57DfOQ-kfZFyghCV4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"approval"},
  {"id":"ytc_UgzCtJZyqvEWGK3SUZl4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_UgwHFR2OtXXjjTyDbzN4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugw-1pNaLqhUVHk4_814AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgyKBFmqN13_gGVy7h14AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgxaCbx-mA7hynzl25N4AaABAg","responsibility":"company","reasoning":"mixed","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgzpWQKjj93gPUIvlhF4AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"outrage"}
]
```
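The raw response is a JSON array of per-comment codes along the four dimensions shown in the table above. A minimal sketch of how such a batch could be parsed and indexed by comment ID before display — note the `ALLOWED` sets are inferred only from the values visible on this page, not from an official codebook, and `index_coding_response` is a hypothetical helper name:

```python
import json

# Allowed values per coding dimension, inferred from the samples on this
# page; the real codebook may include categories not seen here.
ALLOWED = {
    "responsibility": {"none", "ai_itself", "user", "distributed",
                       "developer", "company", "government", "unclear"},
    "reasoning": {"consequentialist", "deontological", "mixed", "unclear"},
    "policy": {"ban", "regulate", "none", "unclear"},
    "emotion": {"fear", "indifference", "approval", "mixed", "outrage"},
}

def index_coding_response(raw: str) -> dict:
    """Parse a raw batch response and index valid records by comment ID.

    Records with a missing dimension or an out-of-codebook value are
    skipped rather than raising, since LLM output can drift from the
    requested schema.
    """
    coded = {}
    for rec in json.loads(raw):
        if all(rec.get(dim) in vals for dim, vals in ALLOWED.items()):
            coded[rec["id"]] = {dim: rec[dim] for dim in ALLOWED}
    return coded

# Toy batch: one valid record, one with an out-of-codebook value.
raw = ('[{"id": "ytc_demo", "responsibility": "none", '
       '"reasoning": "consequentialist", "policy": "ban", "emotion": "fear"},'
       ' {"id": "ytc_bad", "responsibility": "robots", '
       '"reasoning": "mixed", "policy": "none", "emotion": "mixed"}]')
coded = index_coding_response(raw)
print(sorted(coded))                 # ['ytc_demo'] -- invalid record dropped
print(coded["ytc_demo"]["emotion"])  # fear
```

Dropping malformed records instead of raising keeps one bad model output from discarding the whole batch; a stricter pipeline might log the rejects for re-coding.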