Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "That is why new jobs will come up, AI overseers or AI regulators. They will revi…" (ytr_UgztDFOAP…)
- "Idk, people's interests change over time. We are still the "masters of our sea" …" (ytc_UgwmvgFDH…)
- "You bring up an excellent point! The distinction between knowledge and wisdom is…" (ytr_UgwhkO-kM…)
- "AI sticks lots of things together with in some algorithm or other, but it doesn'…" (ytc_UgwQcFQxj…)
- "5:58 yes but today is the worst AI will ever be it's just going to get better an…" (ytc_UgydqwJMa…)
- "Quick trip to chatgpt gave me: "Demis Hassabis, the British co‑founder and CEO o…" (ytc_UgxVFdd1p…)
- "What makes the situation even worse is without these jobs many of the workers, e…" (rdc_d3rt4wg)
- "this is not new, we have been making movies about this for the past 100 years -…" (ytc_UgwVZRor-…)
Comment
I just want to throw this out there because I am seeing way too much AI Fear-Mongering on the internet. Primarily it's to get views/clicks. The reality is that AI is currently only as good as the group of very hard working software and data engineers that are working on the programming and dataset. We are nowhere near the point where AI is a threat to us, and it likely will never be. This good and bad, black and white kind of bipolar thinking we have is not logical when talking about AI. I do not see a threat, and I do not see them surpassing us, either now or in the near term. So sleep tight.
youtube · AI Governance · 2024-01-16T16:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_UgyusyRSIbGmPqjOduN4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugxooy7jJ2ppDBXXmQl4AaABAg","responsibility":"user","reasoning":"deontological","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgzfG-V7LrVI9AGIEHx4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugz8sGFgPQqEBsxWjf94AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgwBxm56oqrxHLhvJKB4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgwIlStxwXgambWSPdh4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugw3KztiB0QHYDObYul4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugx9yb0vxVezDNo4JF94AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyIESpQeNv43hRr7nJ4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgyxgcKnWA1OdHNTvDp4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"fear"}
]
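A response like the one above can be parsed and sanity-checked before the rows are stored as coding results. The sketch below is a minimal example, assuming the category sets are exactly the values visible in this response (the real codebook may allow additional values, and `parse_coding_response` is a hypothetical helper, not part of any existing pipeline):

```python
import json

# Allowed values per dimension, inferred from the responses shown above.
# Assumption: the actual codebook may define additional categories.
ALLOWED = {
    "responsibility": {"developer", "user", "company", "distributed",
                       "ai_itself", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "mixed", "unclear"},
    "policy": {"regulate", "ban", "liability", "none", "unclear"},
    "emotion": {"approval", "fear", "outrage", "indifference", "mixed", "unclear"},
}

def parse_coding_response(raw: str) -> list[dict]:
    """Parse a raw LLM coding response (a JSON array of coded rows)
    and keep only rows with an id and valid values on every dimension."""
    rows = json.loads(raw)
    valid = []
    for row in rows:
        if not row.get("id"):
            continue  # a row without a comment ID cannot be joined back
        if all(row.get(dim) in ok for dim, ok in ALLOWED.items()):
            valid.append(row)
    return valid

# Example with one valid and one malformed row (made-up IDs):
raw = json.dumps([
    {"id": "ytc_example1", "responsibility": "developer",
     "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
    {"id": "ytc_example2", "responsibility": "robots",  # not in the codebook
     "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
])
print(parse_coding_response(raw))  # only ytc_example1 survives
```

Dropping invalid rows (rather than repairing them) keeps the stored results trustworthy; rejected rows can be re-queued for a second coding pass.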