Raw LLM Responses
Inspect the exact model output for any coded comment, or look one up directly by comment ID.
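A minimal sketch of the lookup-by-ID step, assuming coded results are held in a dict keyed by comment ID; the store layout, field names, and the example ID are hypothetical, not the tool's actual schema.

```python
# Hypothetical in-memory store: comment ID -> coded record.
# Field names ("text", "coding", "raw_llm_response") are assumptions.
coded_comments = {
    "ytc_example123": {
        "text": "We lived much better lives before AI...",
        "coding": {
            "responsibility": "distributed",
            "reasoning": "consequentialist",
            "policy": "none",
            "emotion": "mixed",
        },
        "raw_llm_response": '[{"id": "ytc_example123", "...": "..."}]',
    },
}

def lookup(comment_id: str) -> dict:
    """Return the coded record for a comment ID; raises KeyError if absent."""
    return coded_comments[comment_id]

record = lookup("ytc_example123")
print(record["coding"]["reasoning"])  # consequentialist
```

A real store would sit behind a database or API; the dict stands in for whatever backs the ID search box.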
Random samples — click to inspect

- `ytc_UgxcHA-ph…`: "I support self driving, but I want the vehicles on grade separated rails. (Thoug…"
- `ytc_UgxiMa60z…`: "For me human art vs ai art is analogous to terrain modelled by a human and terra…"
- `ytc_UgwAPzGmV…`: "I don't know if i can consider myself disabled, but I have a bit of tremor in my…"
- `ytc_UgxCaImx-…`: "ChatGPT is really starting to become human, huh? Just like you can say something…"
- `ytc_Ugw7tU8WM…`: "when they say things like, "they are not a human...YET", that's cause of concern…"
- `ytc_Ugwlu9C0p…`: "We lived much better lives before AI. I don't need another relationship and cert…"
- `ytc_Ugz9IxIir…`: "I hope AI put us People where we belong for destroying forest. Nature, for mater…"
- `ytc_Ugxi6jZa3…`: "i remember everyone saying 5 years ago: "Robots can repalce anything, but the on…"
Comment
The threat is inherent to social forces much larger than the career ambitions of one man.
As he said in the video, there is a lot of incentive to develop AI because of the tremendous good it is capable of doing.
If you can develop a technology that can save thousands of lives due to better diagnoses, fewer driving fatalities, etc..., wouldn't it be immoral NOT to develop it?
AI is a paradox. The better it gets at doing good, the greater the potential it has for doing bad.
youtube · AI Governance · 2023-05-08T04:3… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | mixed |
| Coded at | 2026-04-27T06:26:44.938723 |
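The four coded dimensions plus the timestamp could be modeled as a small dataclass. This is a sketch: the field names follow the table above, and the example values in the comments are only those visible on this page; the real codebook may define more.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class CodingResult:
    """One coded comment, mirroring the Coding Result table.

    The value lists in the comments are illustrative, drawn from this
    page's data; they are not the full codebook.
    """
    responsibility: str  # e.g. "distributed", "developer", "government", "ai_itself", "none"
    reasoning: str       # e.g. "consequentialist", "deontological", "virtue", "contractualist"
    policy: str          # e.g. "none", "regulate", "liability"
    emotion: str         # e.g. "mixed", "fear", "outrage", "resignation", "approval"
    coded_at: datetime

result = CodingResult(
    responsibility="distributed",
    reasoning="consequentialist",
    policy="none",
    emotion="mixed",
    coded_at=datetime.fromisoformat("2026-04-27T06:26:44.938723"),
)
```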
Raw LLM Response
```json
[
{"id":"ytr_UgxGvBAbVC-HLtwRr2V4AaABAg.9pKDCP6p1fK9pQqvf1G811","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytr_UgxGvBAbVC-HLtwRr2V4AaABAg.9pKDCP6p1fK9pSFaIbrSob","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"fear"},
{"id":"ytr_UgxkUPsWxt4mZRkD-7R4AaABAg.9pJXTEQH7769pKn2s_p_km","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytr_UgzRg3ZyhVTXmYSmYXV4AaABAg.9pJFEQDZsnB9pLWEDn2-Kf","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytr_UgzRg3ZyhVTXmYSmYXV4AaABAg.9pJFEQDZsnB9pM_7zukFm8","responsibility":"ai_itself","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytr_UgwoBoRVbVGUbwNDIxp4AaABAg.9pJBD-0ZgFq9pLiiDFcZ9m","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytr_UgxSwuFHVYTkLpReRHt4AaABAg.9pJ9wYCwPDs9pMN2fQBzPk","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytr_UgxRiE7b2zMrSQzxYwB4AaABAg.9pJ5mzYw-RZ9pRWFXFdENy","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
{"id":"ytr_UgzPJ2c24bgCYRsXDCd4AaABAg.9pJ4M23t36A9pRVfm7qtQk","responsibility":"none","reasoning":"consequentialist","policy":"regulate","emotion":"resignation"},
{"id":"ytr_Ugzq0k1OhdQne81uHOR4AaABAg.9pJ3_x7JZYz9pL-e7_8nfB","responsibility":"none","reasoning":"contractualist","policy":"none","emotion":"approval"}
]
```
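The raw response is a JSON array with one object per comment in the batch. A sketch of parsing and sanity-checking such a response follows; the key names come from the response above, while the two-record sample payload and the function name are illustrative.

```python
import json

# Abbreviated sample payload in the same shape as the raw response above.
raw = '''[
  {"id": "ytr_abc", "responsibility": "none", "reasoning": "consequentialist",
   "policy": "none", "emotion": "indifference"},
  {"id": "ytr_def", "responsibility": "government", "reasoning": "deontological",
   "policy": "regulate", "emotion": "fear"}
]'''

# Every coded record must carry these fields.
REQUIRED_KEYS = {"id", "responsibility", "reasoning", "policy", "emotion"}

def parse_batch(raw_response: str) -> list[dict]:
    """Parse one raw LLM response and verify each record is fully coded."""
    records = json.loads(raw_response)
    for rec in records:
        missing = REQUIRED_KEYS - rec.keys()
        if missing:
            raise ValueError(f"record {rec.get('id')} missing fields: {missing}")
    return records

batch = parse_batch(raw)
print(len(batch), batch[0]["emotion"])  # 2 indifference
```

Checking for missing keys up front catches truncated or malformed model output before it reaches the coding tables.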