Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "as a disabled artist its pretty obvious those ai bros dont actually care about d…" (ytc_UgyCv52F9…)
- "There is a really good reason to not use please and other polite words when prom…" (ytc_Ugzx7kTwq…)
- "Chatgpt can't reason about anything. This is annoyingly giving openai more credi…" (ytc_Ugz-kA7qA…)
- "AI does a great job at altering pictures and videos. BUT has it made your life b…" (ytc_Ugy1x0_4D…)
- "It is only programming. There is no AI. We...destruct ourselves. We want to beli…" (ytc_UgwxOmHBz…)
- "A Robot driver should be as good as a RALLY driver, than only they could be allo…" (ytc_UgzHcj9Zg…)
- "@be7256 That's really fascinating, I appreciate the clarification! I'm just slig…" (ytr_Ugy7LBxy8…)
- "I have saved every ChatGPT conversation. Archives, Select ALL Copy / Paste, Wor…" (ytc_Ugx5ijVCr…)
Comment
> Wow! Elon Musk has seen the 1984-2019 Terminator movies! What a genius! Thank God he is warning us of the potential threat to humanity that AI presents.
>
> Back in 1942, before the term was even coined, the science fiction writer Isaac Asimov wrote The Three Laws of Robotics: A moral code to keep our machines in check. And the three laws of robotics are: a robot may not injure a human being, or through inaction allow a human being to come to harm.

Platform: youtube · Topic: AI Governance · Posted: 2023-04-22T13:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | regulate |
| Emotion | mixed |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_UgzqXrro1BuHgZclS5V4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"mixed"},
  {"id":"ytc_Ugyvc9qyXvpo4uuQoSd4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_Ugxm90Cz9ic2BuQANbp4AaABAg","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Ugy1bTssf0RY0H1PJEd4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"resignation"},
  {"id":"ytc_UgxcsfOIvdygXEZXlxd4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgyJrlVBFipR9PQPldV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
  {"id":"ytc_Ugx6J6rNXfpU8X0dJpB4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgwPAxazlT4uef736iF4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"mixed"},
  {"id":"ytc_UgwPhI6BLMLvjUyXZOF4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgyH_6XkcCtqaALxZgt4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"}
]
```
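The look-up-by-comment-ID inspection described at the top amounts to parsing this JSON array and indexing the records by their `id` field. A minimal sketch in Python, assuming the allowed dimension values are exactly those that appear in the responses shown here (the real codebook may define more categories), and abbreviating the payload to two of the records above:

```python
import json

# Abbreviated raw LLM response: two records copied from the output above.
raw = """[
  {"id":"ytc_UgzqXrro1BuHgZclS5V4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"mixed"},
  {"id":"ytc_UgwPAxazlT4uef736iF4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"mixed"}
]"""

# Allowed values inferred from the responses shown; an assumption, not the codebook.
ALLOWED = {
    "responsibility": {"company", "developer", "distributed", "ai_itself", "none"},
    "reasoning": {"deontological", "consequentialist", "mixed", "unclear"},
    "policy": {"regulate", "unclear"},
    "emotion": {"mixed", "outrage", "indifference", "resignation", "fear", "approval"},
}

def index_codings(payload: str) -> dict:
    """Parse the model output and index records by comment ID,
    rejecting any record with an unexpected dimension value."""
    by_id = {}
    for rec in json.loads(payload):
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{rec['id']}: unexpected {dim}={rec.get(dim)!r}")
        by_id[rec["id"]] = rec
    return by_id

codings = index_codings(raw)
print(codings["ytc_UgwPAxazlT4uef736iF4AaABAg"]["policy"])  # → regulate
```

Validating against a fixed value set before indexing catches the common failure mode where the model invents a label outside the coding scheme.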