Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment

> Growing up I loved movies/shows like Ghost in the shell, and always dreamed of living in a future world where man and machine began to merge together and become one. A future where almost anything is possible. I'm 35 now and it seems like it's not only a possibility, but an inevitability that I will see all of this happen within my lifetime. I'm realizing just how much the line is beginning to blur. I don't think humans are ready for an actual AI that may become self aware. At all. But time waits for no man and all that.. Here's hoping that it IS going to be more like Ghost in the Shell, and less Terminator/Ex Machina. 🍻

| Field | Value |
|---|---|
| Platform | youtube |
| Topic | AI Governance |
| Posted | 2024-04-04T03:4… |
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | unclear |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response

```json
[
{"id":"ytc_Ugy0O-CQLXFpU5fOhaN4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgzE38gYZJHSrcL1VVh4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugy9v4p1qYDUI1choll4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwpREhCn8rovUo2Oop4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_Ugy1NmYI2gi6bujiTnJ4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgwBcw7J-dzxQxV-SqZ4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgxNRYQKZC35KQf5r2t4AaABAg","responsibility":"user","reasoning":"virtue","policy":"industry_self","emotion":"resignation"},
{"id":"ytc_UgyJx9Gqe-bHpXiQWoZ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_Ugy1pqx7b8-3QdFDK8J4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgwcgVrr2LAttvXorXZ4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"}
]
```
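As the response above shows, the model returns one JSON object per comment, each carrying the comment `id` plus the four coded dimensions (`responsibility`, `reasoning`, `policy`, `emotion`). A minimal sketch of the lookup-by-comment-ID step, assuming the raw response is available as a JSON string (the two sample entries below are copied from the response above; the function name `index_by_comment_id` is illustrative, not part of the tool):

```python
import json

# Two entries copied verbatim from the raw LLM response above.
RAW_RESPONSE = """[
  {"id":"ytc_UgwpREhCn8rovUo2Oop4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugy0O-CQLXFpU5fOhaN4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"outrage"}
]"""

def index_by_comment_id(raw: str) -> dict:
    """Parse a raw batch response and index the coded dimensions by comment ID."""
    return {row["id"]: row for row in json.loads(raw)}

codes = index_by_comment_id(RAW_RESPONSE)
print(codes["ytc_UgwpREhCn8rovUo2Oop4AaABAg"]["emotion"])  # → approval
```

Indexing by `id` is what lets a coding result (like the table above) be traced back to the exact object inside the batched model output.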