Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- "I suppose all services and products delivered by AI must be consumed by humans, …" — `ytc_UgwaZZm6r…`
- "I always thought the story of the forbidden fruit, & the tree of knowledge in th…" — `ytc_UgwTHnU7G…`
- "It's funny, listening to the workers complain, when they're the ones who voted f…" — `ytc_UgxV-GJgj…`
- "I don't understand why the marketing and talking points are always about replaci…" — `ytc_UgyvXBp4d…`
- "@meepinandmorpin😮 dude I think you're wrong in naive😮 again AI is being impleme…" — `ytr_Ugxbwuipz…`
- "I hope these were treated with nightshade before posting because if not theyre j…" — `ytc_Ugxu74jU1…`
- "AI is a fucking scam! It cannot do 96% of human jobs! :D More ovet making progra…" — `ytc_Ugwg5M50U…`
- "...why would anyone want this, though? AI is useful for admin tasks, maybe, but …" — `ytc_Ugy60wCxA…`
Comment
my thanks goes to Geoffrey and Stanley for offering us this conversation.
here's one series of thoughts in response:
there are some problems with comparing the role of ai in our lives in today's times with that of having a cute tiger cub as a pet: firstly, you can't get a tiger cub to replace most of the employees in your company so that it gets to do all of their tasks and in the process help you save a lot of money, office space and hr responsibilities. as far as i know, a tiger cub is not interested in doing tasks that humans can do and instead is much more interested in doing tasks that only a tiger can do. so why get ai to do so many tasks that human beings have been doing for centuries in any case?
if you say it is for increased efficiency then i will say you may not have pondered enough whether too much efficiency in life is actually a good thing or a bad thing. for example, as far as i understand, any company that conducts its business more efficiently than is required pretty soon has no business to take care of.
but if you say we 'should' get ai to do a lot of human tasks because we 'can' make ai do a lot of human tasks, then i'm afraid that is as sensible a logic as saying we 'should' make human workers work 90 hours per week because we 'can' make human workers work 90 hours per week.
secondly, once a once-cute tiger cub matures into an adult tiger then it is practically of no use to a human being and vice versa, that is, once a tiger cub matures into an adult tiger then a human being is practically of no use to it (except as a one-time pray animal of course). therefore, under those circumstances the wise thing to do for both parties is to go their separate ways, find their unique niches in the world's ecosystem and settle there, without unnecessarily getting in each other's way.
so why not let ai mature into adulthood and then let it go on its own journey to find its unique ecological niche in this world and settle there, without dragging it back into managing human affairs all over again? what do you think might happen if you go into the forest to locate your now-adult tiger cub just as it was about to take a nap after an appetising lunch, so that you can then drag it back to your home and get it to scare off the neighbour's kids who are not letting you have your own post-lunch nap? do you really believe the now-adult tiger cub will pardon your unexpected intrusion remembering how you had patted it so warmly all those your earlier for doing all the tasks of all those employees you had gotten it to replace in your company?
i rest my case. thanks again.
Source: youtube · Topic: AI Governance · Posted: 2025-06-18T08:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | mixed |
| Policy | unclear |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
  {"id":"ytc_UgzFonq1z6noDh9ltcJ4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
  {"id":"ytc_Ugwo-tUkkZ-vhzWlEtJ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgwqTctshVP-wcCexnt4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgzCNPHT9_LYvFzhl9J4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgwluiIT7K9CubW6j1R4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgyqoGS1Facpe8NwCI14AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgxFZdL4qEcOLD1AmKN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugx_lesQ684gQYVHNuV4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgyGJg_FzOcHvX6Whbd4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgxcuViMB61M8Bvx9b54AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"disapproval"}
]
```
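A coding pipeline consuming a raw response like the one above needs to parse the JSON array and key each coding by its comment ID so that individual comments can be looked up. The following is a minimal sketch of that step; the function name `index_codings` and the inline sample data are illustrative, not part of the project itself.

```python
import json

# Two rows in the same shape as the raw LLM response shown above
# (sample data for illustration only).
RAW_RESPONSE = """
[
  {"id": "ytc_UgzFonq1z6noDh9ltcJ4AaABAg", "responsibility": "user",
   "reasoning": "consequentialist", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgyGJg_FzOcHvX6Whbd4AaABAg", "responsibility": "government",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"}
]
"""

# The four coding dimensions shown in the Coding Result table.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def index_codings(raw: str) -> dict:
    """Parse the model output and key each coding by its comment ID."""
    out = {}
    for row in json.loads(raw):
        # Skip malformed rows: every dimension must be present.
        if "id" not in row or not all(dim in row for dim in DIMENSIONS):
            continue
        out[row["id"]] = {dim: row[dim] for dim in DIMENSIONS}
    return out

codings = index_codings(RAW_RESPONSE)
print(codings["ytc_UgyGJg_FzOcHvX6Whbd4AaABAg"]["policy"])  # regulate
```

Indexing by ID is what makes the "inspect the exact model output for any coded comment" lookup above a constant-time dictionary access rather than a scan over every response.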