Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "AI won't end jobs, 3-D printer/replicator automation of building physical things…" (ytc_UgzsAj7ZM…)
- "If not now, id say EASILY within the next 10 years, or Less, A.I. will lead to U…" (ytc_UgyWcKGo4…)
- "AI will not be a thing in the near future.. if AI take jobs, who will you pay a…" (ytc_UgxLTLfun…)
- "The only way I see to use AI properly for art (at least for art meant to be sold…" (ytc_UgyjY699R…)
- "Billions? 8:20 Give me a couple of millions, and i will develop 100% European ge…" (ytc_UgxRwgPl-…)
- "There isn't a job on this planet that can't be threatened by AI. Its even in my …" (ytr_UgwbLP8CL…)
- "At this point, deepfake needs to get banned, not just being heavily regulated bu…" (ytr_UgxiEeXq2…)
- "I truly enjoyed learning about ChatGPT. It was easy to follow and Charlie gives …" (ytc_UgxMjeett…)
Comment
As we approach 2030, I think AI development is nearing what is known in the Gartner Hype Cycle as ‘the peak of inflated expectations’ and this interview is a perfect example.
Add another 20-30 years and we might just be on that ‘slope of enlightenment’ (or in this case slope of dread) and what is said here may become more relevant. Not that it is irrelevant but the timeline seems a bit fanciful. I suspect war, social unrest, energy production, pandemics, antibiotic resistance, climate change effects and economic dysfunction will be of more concern over the next ten years. AI may help us on one or two of these🤞
youtube · AI Governance · 2025-09-04T19:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | resignation |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_Ugyn7k5h5p5TlWN2crx4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwAtQSAdClBCT1_2tB4AaABAg","responsibility":"distributed","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugx33zjruhzxftogvbl4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgxOp9KQk5iOWWUx0Vp4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugyo2aAKXEQHJLi40E14AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgwGQ0ERxd6NZ-AjVMt4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxqFHqxgiZRrPtUyJ94AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugx5DhW4OeZYSczdPlh4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgwEA6qfAGkPvAaOxEp4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgxKJbwVwUG6fPLT22t4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"mixed"}
]
```
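The raw response is a JSON array of per-comment coding records, one object per comment, each with the four dimensions shown in the table above. A minimal Python sketch of how such a response could be parsed and validated before display — note that the allowed-value sets below are inferred only from the examples on this page, and the full codebook may contain more categories:

```python
import json

# Allowed values per coding dimension (assumption: inferred from the
# sample records above; the real codebook may define additional values).
ALLOWED = {
    "responsibility": {"none", "distributed", "ai_itself", "company", "developer"},
    "reasoning": {"unclear", "consequentialist", "deontological", "mixed"},
    "policy": {"none", "regulate"},
    "emotion": {"indifference", "resignation", "fear", "outrage", "mixed"},
}

def parse_coding_response(raw: str) -> dict:
    """Parse a raw LLM coding response and index records by comment ID.

    Raises ValueError if a record is missing a dimension or uses a
    value outside the allowed set, so malformed model output is caught
    before it reaches the dashboard.
    """
    by_id = {}
    for rec in json.loads(raw):
        for dim, allowed in ALLOWED.items():
            value = rec.get(dim)
            if value not in allowed:
                raise ValueError(f"{rec.get('id')}: bad {dim}={value!r}")
        by_id[rec["id"]] = rec
    return by_id

# Hypothetical one-record response, for illustration only.
raw = ('[{"id":"ytc_example","responsibility":"none",'
       '"reasoning":"unclear","policy":"none","emotion":"indifference"}]')
coded = parse_coding_response(raw)
print(coded["ytc_example"]["emotion"])  # indifference
```

Indexing by `id` mirrors the "look up by comment ID" behaviour of the page: a single parse yields a dictionary that answers each lookup in constant time.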