Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples

- "Terminator wasnt AI gone wrong. And trump has the where with all of a 3rd grader…" (ytc_UgzkilYue…)
- "For a second I thought the lady or AI was about to say well I don’t like your at…" (ytc_UgzSHmM1H…)
- "Israel is using AI-Lavender to kill Palestinians, UnitedHealthCare used AI to cr…" (ytc_UgwJyUqjr…)
- "Already hyped and liked, I’m just commenting to further boost the video for the …" (ytc_Ugzu_sZdQ…)
- "It could be much worse; All this assumes one single super AI. Imagine pitching …" (ytc_Ugx_jInzT…)
- "Right now, the debt ceiling is more of a concern than AI. Even with an agreement…" (ytc_Ugxwu_K11…)
- "In addition not once did they mention anything about open source. How many token…" (ytr_Ugw74p-SZ…)
- "Dude Mercedes is far far behind Tesla in self driving.. Start watching Tesla vid…" (ytr_UgwmPjX33…)
Comment

> As an AI language model, I cannot provide personal opinions or beliefs, but I can provide some information on the topic. The dangers of government and AI are different in nature, and it is difficult to compare them. Government can be dangerous if it is corrupt, abusive of its power, or does not act in the best interests of its citizens. On the other hand, AI can be dangerous if it is not developed responsibly, is not properly regulated, or is designed with malicious intent. It is important to carefully consider the potential risks and benefits of both government and AI, and to work towards ensuring that both are used in responsible and ethical ways.

Platform: youtube · Posted: 2023-04-10T08:1… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | mixed |
| Policy | unclear |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:59.937377 |
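Each coded comment carries one value per dimension, as in the table above. A minimal sketch of a consistency check, assuming the allowed values are exactly those observed in this batch (the real codebook may define more categories):

```python
# Allowed values per dimension, inferred from the codes observed in this
# batch; this is an assumption, not the tool's actual codebook.
ALLOWED = {
    "responsibility": {"government", "developer", "user", "ai_itself", "distributed", "none"},
    "reasoning": {"consequentialist", "deontological", "mixed", "unclear"},
    "policy": {"regulate", "liability", "none", "unclear"},
    "emotion": {"fear", "outrage", "resignation", "approval", "disapproval", "indifference"},
}

def validate(row):
    """Return (dimension, value) pairs in `row` that fall outside ALLOWED."""
    return [(dim, row.get(dim)) for dim in ALLOWED if row.get(dim) not in ALLOWED[dim]]

# The coding result shown above passes cleanly:
print(validate({"responsibility": "distributed", "reasoning": "mixed",
                "policy": "unclear", "emotion": "indifference"}))  # []
```

A check like this is useful before loading model output into analysis, since LLMs occasionally emit off-codebook labels.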
Raw LLM Response
```json
[
{"id":"ytr_UgxOGHpLUg0X0KTHVxl4AaABAg.9oJ2agl7A5t9oJ5JeELXFd","responsibility":"government","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytr_UgztULRJ0-6soRWlNZZ4AaABAg.9oJ27x2_MIz9oJaRH0RPAF","responsibility":"government","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytr_UgztULRJ0-6soRWlNZZ4AaABAg.9oJ27x2_MIz9oJp3sZ29B1","responsibility":"distributed","reasoning":"mixed","policy":"unclear","emotion":"indifference"},
{"id":"ytr_Ugz4z21ANRMm6NkDXgd4AaABAg.9nr0GsFMAFT9oJaFcYv_oF","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"outrage"},
{"id":"ytr_Ugyrig_hFiaIldygeQV4AaABAg.AIzfcuXnrI0AJ8B3uKPlaJ","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"resignation"},
{"id":"ytr_Ugz2idSuh4s6s2_LyCN4AaABAg.AIoQ4vAihIdAIp0EnG9a2y","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
{"id":"ytr_UgwqS0noZsW46nhNXuF4AaABAg.AIlRfg3n8zkAIp-zTk9A8K","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytr_Ugxl3eYXwwSF5gdBcGV4AaABAg.AIl-tXdZKNBAIlJnCclzjC","responsibility":"user","reasoning":"deontological","policy":"unclear","emotion":"indifference"},
{"id":"ytr_Ugzk-VAhEelGybLVtQt4AaABAg.AD84WbVQdsrADd3LRF33O7","responsibility":"government","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytr_UgzGxYJz6cN5OcgfA8h4AaABAg.AC31gJwNYueAC32Yx7gbSV","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"disapproval"}
]
```
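The raw response is a JSON array with one object per coded comment, keyed by comment ID. A minimal sketch of the "look up by comment ID" step, assuming only that the response parses as shown (truncated here to three rows for brevity):

```python
import json

# Raw model output for one batch, in the shape shown above
# (shortened to three of the rows for brevity).
raw_response = """
[
{"id":"ytr_UgxOGHpLUg0X0KTHVxl4AaABAg.9oJ2agl7A5t9oJ5JeELXFd","responsibility":"government","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytr_UgztULRJ0-6soRWlNZZ4AaABAg.9oJ27x2_MIz9oJaRH0RPAF","responsibility":"government","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytr_UgztULRJ0-6soRWlNZZ4AaABAg.9oJ27x2_MIz9oJp3sZ29B1","responsibility":"distributed","reasoning":"mixed","policy":"unclear","emotion":"indifference"}
]
"""

codes = json.loads(raw_response)
# Index every coding result by its comment ID for O(1) lookup.
by_id = {row["id"]: row for row in codes}

row = by_id["ytr_UgztULRJ0-6soRWlNZZ4AaABAg.9oJ27x2_MIz9oJp3sZ29B1"]
print(row["responsibility"], row["emotion"])  # distributed indifference
```

Indexing by ID is what makes the lookup box practical: given any comment ID, the tool can surface the exact batch response that coded it.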