Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- ytc_UgzUuo8T7…: "Your telling me the MIT robots are being out classed by a robot made in Russia…"
- ytc_UgyTtjTt8…: "Our company uses AI to script and automate data analysis and preparation etc. It…"
- ytc_UgyPdJ-ia…: "It’s fascinating but we shall see what this produces. We should always explore n…"
- ytc_UgwhAvI2D…: "the girl in red is ai? i thought i finally found my future wife😔 welp, back to m…"
- ytr_Ugzh6LYJH…: "@TarcilaArts do artists make art because they love making art, putting hours in…"
- ytc_UgzFulLWl…: "Who is going to refill the fuel tanks? You'll either need to pay someone to be o…"
- ytc_UgwzXY5JS…: "I don’t think it’s the face that most people would be touching on that particula…"
- ytc_UgzUWJOxL…: "As someone with an HPC cluster with 33 GPUs in it, I agree that AI will never re…"
Comment
Me: what are the dangers of AI?
ChatGPT: boring answer given about
1. Job displacement
2. Bias and discrimination
3. Privacy and security
4. Autonomous weapons
5. Lack of transparency and accountability
6. Unemployment and socioeconomic impact
Me: you missed out existential threat to kill humans above
ChatGPT:
I apologize for the oversight. One of the concerns sometimes raised regarding artificial intelligence is the concept of an existential threat. This idea suggests a hypothetical scenario where AI becomes extremely advanced and surpasses human intelligence to the extent that it poses a direct threat to humanity.
While this concept has been popularized in science fiction, it is important to note that the notion of AI autonomously deciding to harm or destroy humans is currently speculative and not supported by scientific evidence. The development of AI is guided by human values, and responsible researchers and organizations prioritize safety measures and ethical considerations.
Nonetheless, it is crucial for the AI community to continue researching and implementing safety measures to ensure that AI systems remain beneficial and aligned with human values. The field of AI safety focuses on developing techniques that prevent unintended consequences and promote the safe and reliable use of AI technology.
By taking precautions, establishing ethical frameworks, and maintaining human control and oversight, the potential risks associated with advanced AI can be mitigated, thereby ensuring the safe and beneficial integration of AI into our society.
youtube · AI Governance · 2023-05-24T21:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
[
{"id":"ytc_UgyGcQCpUU0K7UX9yMp4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyNp0UGOs2IDRd419N4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugw8aghO76KJMhZbw1R4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugy5exZfqrrjjgawTVJ4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgyqkMZ6VkLTH6SH3xl4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgxeauozBIsGpY99-SB4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgxT5nf6DkmEze5lmBt4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_UgzU-T2vVGDhGcsTox14AaABAg","responsibility":"developer","reasoning":"virtue","policy":"liability","emotion":"outrage"},
{"id":"ytc_Ugx3sxSMyosrqfoHNLF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgzGIRPiLLH1TQ9blQ54AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"outrage"}
]
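Records like the ones above can be checked and indexed before they feed the coding-result table. A minimal Python sketch, assuming the value sets are only those visible in this sample (the actual codebook may define more categories), and that comment IDs use the `ytc_`/`ytr_` prefixes shown here:

```python
import json

# Allowed values per dimension, inferred from the sample responses above;
# the real codebook may include additional categories.
ALLOWED = {
    "responsibility": {"none", "ai_itself", "distributed", "company", "developer"},
    "reasoning": {"unclear", "consequentialist", "deontological", "virtue"},
    "policy": {"none", "regulate", "liability"},
    "emotion": {"indifference", "outrage", "fear", "mixed", "approval"},
}

def validate(raw: str) -> dict:
    """Parse a raw LLM response and index valid coded records by comment ID."""
    records = json.loads(raw)
    coded = {}
    for rec in records:
        cid = rec.get("id", "")
        # IDs in this sample start with ytc_ (comments) or ytr_ (replies).
        if not cid.startswith(("ytc_", "ytr_")):
            raise ValueError(f"unexpected id format: {cid!r}")
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{cid}: bad {dim} value {rec.get(dim)!r}")
        coded[cid] = {dim: rec[dim] for dim in ALLOWED}
    return coded
```

Indexing by ID is what makes the "look up by comment ID" view above cheap: one dictionary lookup returns the four coded dimensions for any inspected comment.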