Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples

- "Look at the people behind Ai. That is enough for me to conclude this technology …" (ytc_Ugy4p4LEM…)
- "Yeah advancement of ai and robots is also very beneficial. Like it has helped a …" (ytr_UgzFxX6gU…)
- "I have read that the military is seriously working on this; particularly in reg…" (ytc_UgyQsa-3I…)
- ""You can accelerate the workflow" That will ultimately only benefit the rich, no…" (ytr_Ugx5R1Lkm…)
- "I'm so glad the racists got on the internet early to poison the AI training.…" (ytc_Ugzj-E021…)
- "Here's a content list of the course: Introduction (0:00:00) Search (0:0…" (ytc_UgxGmBZYh…)
- "12:23 we had those "dame da ne" ai videos that were pretty ahead of there time a…" (ytc_Ugwz2iO5p…)
- "Looking forward to the day when AI replaced this horrendous host. The worst on L…" (ytc_Ugws2ZRLC…)
Comment
the problem is you would never be able to make all prople do it ethically. if good people slow down in AI advancement, bad people will win the race and it become even worse. The truth is, whenever the know how of human beings reach a point, capable of doing something, it's not possible to stop it from happening. This is true for nuclear weapons, and will be true with AI, genetic engineering, etc. There's no brake to stop the car from running towards the cliff. This is possibly the natural phenomenon of civilization disappearing and emerging in cycles throughout the history of earth. Life is all about experiencing. So since there's nothing we can do about it, just try your best to prepare for it and in a way enjoy the opportunity to see the world changing in such an unbelievable speed. Even if that means we would be killed by AI in our 50s, it maybe a better life experience than those lived on earth 1000 years ago when they saw the world stay the same (besides wars) throughout their lives and experienced so little in their whole life.
youtube · AI Governance · 2025-09-06T11:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_UgxBXLMOC2bzKhbCUEZ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgyjBkGK9Kd5Xpo9abJ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugy0Etb1GX8T2ZBvN2R4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_UgxyewKVg1FhFY2Ycdd4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugxz4NHDTALApIiH1OB4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgxDMrbGyUh5QLyP7nJ4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_UgyfCDy19lK-ND8bzmB4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"},
{"id":"ytc_Ugxq2D92M5xYisXy47x4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyG9uRRh2fdhfs0WVJ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_Ugzbph6_tZCXBVTiBtB4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"fear"}
]
```
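A raw response like the one above is a JSON array of per-comment codes, one object per comment ID, with the four dimensions from the table (responsibility, reasoning, policy, emotion). A minimal sketch of how such a response might be parsed and validated before use; the allowed value sets below are assumptions inferred only from the values visible on this page, not the full codebook:

```python
import json

# Assumption: allowed values inferred from the output shown above;
# the actual codebook may define additional categories.
ALLOWED = {
    "responsibility": {"none", "distributed"},
    "reasoning": {"consequentialist", "virtue", "unclear"},
    "policy": {"none", "regulate", "industry_self"},
    "emotion": {"approval", "indifference", "mixed", "fear"},
}

def parse_codes(raw: str) -> list[dict]:
    """Parse a raw LLM coding response, keeping only well-formed records."""
    records = json.loads(raw)
    valid = []
    for rec in records:
        if not isinstance(rec, dict) or "id" not in rec:
            continue  # skip records missing a comment ID
        # keep only records whose every dimension has an allowed value
        if all(rec.get(dim) in vals for dim, vals in ALLOWED.items()):
            valid.append(rec)
    return valid

# hypothetical example record, same shape as the response above
raw = ('[{"id":"ytc_example","responsibility":"none",'
       '"reasoning":"consequentialist","policy":"none","emotion":"fear"}]')
codes = parse_codes(raw)
print(len(codes))  # 1
```

Dropping rather than repairing malformed records keeps the coded dataset conservative: a record with an out-of-vocabulary value is more likely a model error than a new category.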