Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect:

- "Advance Ai will take all job from human whose good at using Ai Tools. Because yo…" (ytr_UgyJCNcF5…)
- "$QAI is here now live on BINGX let's deep dive into QuantixAI technology, AI des…" (ytc_UgzFez9q0…)
- "anyone using AI should be forced to pay royalties to any art and wort the used A…" (ytc_UgxlS-w0L…)
- ""AI" ... LLMS are garbage. Microsoft is trying to create demand for it by for…" (ytc_UgyTY2-ui…)
- "It’s what I’m hoping for. Google is scary. Been in power for to long. But then c…" (ytr_UgwYmNLzA…)
- "The bias of the AI machine has been programmed to twist truth and is going to de…" (ytc_UgxafqwvN…)
- "They don't need help per se; they need "permission". Both S. Korea and Japan. A…" (rdc_dkzpwyv)
- "AI is a reflection of us. It is trained on our collective output. If it’s manipu…" (ytc_UgzdJ8BvN…)
Comment

> I get the feeling that once the AI gets to a point of advancement, it will simply look at humans as destructive and unpredictable. It will probably want to sideline us for the health of the planet and our species. We will become second class citizens subservient to the Advanced AI intelligence. I don't see how something so smart and analytical will be able to view the overall human species as anything but dangerous that needs to be be either controlled or eliminated entirely for the good of it's own survival and that of the planet. Just my predictions but who knows!

Platform: youtube · Topic: AI Governance · Posted: 2023-11-02T10:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | fear |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
[
{"id":"ytc_Ugz9QA_Z_tDTFWspqeF4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"approval"},
{"id":"ytc_UgxLXt7IcKXx1OgWDwl4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytc_UgxsPwaRnw4w_6nXNQB4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugwdd2XHFn3vqQD7EiR4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_UgzEstUayZhCpajDOHh4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugy8Aw0U0bOfh4rAAgJ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwJRS2gLugovS02taJ4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugwy2vGC9YFMzfX4W954AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgwRf5aFHZAuXDSzA7l4AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugw_MS29LoIkvE3pArt4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"resignation"}
]
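The raw response above is a JSON array of per-comment codes with fixed keys (`id`, `responsibility`, `reasoning`, `policy`, `emotion`). A minimal sketch of how such a batch response might be parsed and validated before it is stored — the allowed value sets below are inferred only from the codes visible on this page, so the real codebook may define more categories:

```python
import json

# Allowed values per dimension — inferred from the codes visible on this
# page (assumption; the actual codebook may include further categories).
ALLOWED = {
    "responsibility": {"ai_itself", "developer", "government", "none"},
    "reasoning": {"consequentialist", "deontological", "mixed", "unclear"},
    "policy": {"regulate", "none"},
    "emotion": {"approval", "mixed", "fear", "outrage",
                "indifference", "resignation"},
}

def parse_batch(raw: str) -> list[dict]:
    """Parse a raw LLM batch response, keeping only well-formed records."""
    records = json.loads(raw)
    valid = []
    for rec in records:
        if not isinstance(rec, dict) or "id" not in rec:
            continue  # skip records missing a comment ID
        # Keep the record only if every dimension has a known value.
        if all(rec.get(dim) in vals for dim, vals in ALLOWED.items()):
            valid.append(rec)
    return valid

# Hypothetical example: the second record has an unknown responsibility
# value ("alien") and is therefore dropped.
raw = ('[{"id":"ytc_x","responsibility":"ai_itself","reasoning":"mixed",'
       '"policy":"none","emotion":"fear"},'
       '{"id":"ytc_y","responsibility":"alien","reasoning":"mixed",'
       '"policy":"none","emotion":"fear"}]')
coded = parse_batch(raw)
print([r["id"] for r in coded])  # → ['ytc_x']
```

Validating against a closed value set like this catches the common failure mode of LLM coders drifting outside the codebook, so only clean records reach the "Coding Result" table.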