Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "AI will think in first principles. So if we ask it to solve world hunger, or sol…" (ytc_UgzaypavH…)
- "The real reason AI is killing the value of a degree is that you can solve proble…" (ytc_Ugzkh3YGm…)
- "Ai is very appealing for the people with little or no actual talent. The person …" (ytc_Ugy758dpq…)
- "50 years ago these same type of people.e predicted flying cars but we're not eve…" (ytc_UgwXjW3e7…)
- "No. It is not even in the intention of the AI tool providers. For instance, Lo…" (ytr_Ugwoyt3tP…)
- "Imagine owning an A.I/robot, and it goes job hunting for you. No more work! 😅…" (ytc_UgwknXtL_…)
- "Excellent talk. If you consider all life on Earth from the smallest with a brain…" (ytc_UgwIHmqZX…)
- "The Alberta CDN government in coalition with O'leary ventures (us) is doing the …" (ytc_UgzyLYNqw…)
Comment (youtube · AI Governance · 2023-06-10T22:2…)

> Thinking about a "Goog guy's AI" vs a "Bad guy's AI" situation. We know that some groups or nations are looking to weaponize AI. Shall we set out a hypothesis that all AIs will eventually merge into one and its goal of total control would merge as well?
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_Ugxep-sB529TcYARBSd4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgyHHxq2z5eThcnlNy14AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugy_EfaIWPplBMKgQ794AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugz4en2HyR52WnWYX5t4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgxGu3BGJo6ygz6qJrJ4AaABAg","responsibility":"government","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_Ugwrk02uEi2HMDpMDcJ4AaABAg","responsibility":"ai_itself","reasoning":"virtue","policy":"none","emotion":"approval"},
  {"id":"ytc_UgyE4Rxvvoo3uLtn2DJ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"ytc_Ugz2SyH8I4aYZ1HEIYh4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxrPHPGIOicQVVUHux4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugwy1PXGecsEocjfoHF4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"liability","emotion":"outrage"}
]
```
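A batch response like the one above can be parsed and validated before the codes are stored. Below is a minimal sketch in Python; the allowed-value sets are inferred from the samples and table on this page and are illustrative, not the tool's actual schema:

```python
import json

# Allowed values per coding dimension, inferred from the samples above
# (illustrative assumption, not the tool's definitive vocabulary).
ALLOWED = {
    "responsibility": {"ai_itself", "distributed", "company", "government",
                       "developer", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"regulate", "liability", "none", "unclear"},
    "emotion": {"fear", "outrage", "approval", "mixed", "indifference"},
}

def parse_batch(raw: str) -> dict[str, dict[str, str]]:
    """Parse a raw LLM batch response into {comment_id: codes},
    dropping rows with a missing id or out-of-vocabulary values."""
    coded = {}
    for row in json.loads(raw):
        cid = row.get("id")
        if not cid:
            continue
        codes = {dim: row.get(dim) for dim in ALLOWED}
        if all(codes[dim] in ALLOWED[dim] for dim in ALLOWED):
            coded[cid] = codes
    return coded

raw = ('[{"id":"ytc_x","responsibility":"company",'
       '"reasoning":"consequentialist","policy":"regulate","emotion":"fear"}]')
print(parse_batch(raw)["ytc_x"]["policy"])  # regulate
```

Validating against a closed vocabulary like this catches the common failure mode where the model invents a label outside the codebook; rejected rows can then be re-queued for recoding rather than silently stored.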